2026-03-05 00:00:13.425849 | Job console starting
2026-03-05 00:00:13.451641 | Updating git repos
2026-03-05 00:00:13.836753 | Cloning repos into workspace
2026-03-05 00:00:14.400724 | Restoring repo states
2026-03-05 00:00:14.475965 | Merging changes
2026-03-05 00:00:14.475988 | Checking out repos
2026-03-05 00:00:15.317131 | Preparing playbooks
2026-03-05 00:00:17.094596 | Running Ansible setup
2026-03-05 00:00:24.501389 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-05 00:00:26.386626 |
2026-03-05 00:00:26.386788 | PLAY [Base pre]
2026-03-05 00:00:26.446385 |
2026-03-05 00:00:26.446520 | TASK [Setup log path fact]
2026-03-05 00:00:26.530383 | orchestrator | ok
2026-03-05 00:00:26.574015 |
2026-03-05 00:00:26.574158 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-05 00:00:26.624144 | orchestrator | ok
2026-03-05 00:00:26.657003 |
2026-03-05 00:00:26.657123 | TASK [emit-job-header : Print job information]
2026-03-05 00:00:26.730701 | # Job Information
2026-03-05 00:00:26.730879 | Ansible Version: 2.16.14
2026-03-05 00:00:26.730916 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-05 00:00:26.730951 | Pipeline: periodic-midnight
2026-03-05 00:00:26.730974 | Executor: 521e9411259a
2026-03-05 00:00:26.730995 | Triggered by: https://github.com/osism/testbed
2026-03-05 00:00:26.731017 | Event ID: 4d3fa243eeba40b18dad4451eb586835
2026-03-05 00:00:26.737540 |
2026-03-05 00:00:26.737644 | LOOP [emit-job-header : Print node information]
2026-03-05 00:00:26.920991 | orchestrator | ok:
2026-03-05 00:00:26.921238 | orchestrator | # Node Information
2026-03-05 00:00:26.921275 | orchestrator | Inventory Hostname: orchestrator
2026-03-05 00:00:26.921300 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-05 00:00:26.921322 | orchestrator | Username: zuul-testbed03
2026-03-05 00:00:26.921343 | orchestrator | Distro: Debian 12.13
2026-03-05 00:00:26.921366 | orchestrator | Provider: static-testbed
2026-03-05 00:00:26.921387 | orchestrator | Region:
2026-03-05 00:00:26.921408 | orchestrator | Label: testbed-orchestrator
2026-03-05 00:00:26.921428 | orchestrator | Product Name: OpenStack Nova
2026-03-05 00:00:26.921448 | orchestrator | Interface IP: 81.163.193.140
2026-03-05 00:00:26.939993 |
2026-03-05 00:00:26.940110 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-05 00:00:27.842298 | orchestrator -> localhost | changed
2026-03-05 00:00:27.848866 |
2026-03-05 00:00:27.848961 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-05 00:00:30.756663 | orchestrator -> localhost | changed
2026-03-05 00:00:30.773164 |
2026-03-05 00:00:30.773256 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-05 00:00:31.431199 | orchestrator -> localhost | ok
2026-03-05 00:00:31.437035 |
2026-03-05 00:00:31.437129 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-05 00:00:31.496450 | orchestrator | ok
2026-03-05 00:00:31.551990 | orchestrator | included: /var/lib/zuul/builds/8dd41a9c45fa457fb0856736771f2ffb/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-05 00:00:31.576826 |
2026-03-05 00:00:31.576921 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-05 00:00:34.735801 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-05 00:00:34.736055 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/8dd41a9c45fa457fb0856736771f2ffb/work/8dd41a9c45fa457fb0856736771f2ffb_id_rsa
2026-03-05 00:00:34.736094 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/8dd41a9c45fa457fb0856736771f2ffb/work/8dd41a9c45fa457fb0856736771f2ffb_id_rsa.pub
2026-03-05 00:00:34.736116 | orchestrator -> localhost | The key fingerprint is:
2026-03-05 00:00:34.736136 | orchestrator -> localhost | SHA256:jnKDJrP66nezWZ9MagXLk3z+8BvPJAyRM6TINAW7Qt8 zuul-build-sshkey
2026-03-05 00:00:34.736155 | orchestrator -> localhost | The key's randomart image is:
2026-03-05 00:00:34.736184 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-05 00:00:34.736203 | orchestrator -> localhost | | +o. . |
2026-03-05 00:00:34.736348 | orchestrator -> localhost | | o + o . |
2026-03-05 00:00:34.736377 | orchestrator -> localhost | | . + . = |
2026-03-05 00:00:34.736396 | orchestrator -> localhost | | . . o. + |
2026-03-05 00:00:34.736413 | orchestrator -> localhost | | . ooES. |
2026-03-05 00:00:34.736437 | orchestrator -> localhost | | .. O oo |
2026-03-05 00:00:34.736454 | orchestrator -> localhost | | o + +.*o + . |
2026-03-05 00:00:34.736471 | orchestrator -> localhost | | * =oo=.+ * |
2026-03-05 00:00:34.736488 | orchestrator -> localhost | |+=+ .o+. +.+.o |
2026-03-05 00:00:34.736505 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-05 00:00:34.736582 | orchestrator -> localhost | ok: Runtime: 0:00:01.924013
2026-03-05 00:00:34.744103 |
2026-03-05 00:00:34.744230 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-05 00:00:34.840191 | orchestrator | ok
2026-03-05 00:00:34.849437 | orchestrator | included: /var/lib/zuul/builds/8dd41a9c45fa457fb0856736771f2ffb/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-05 00:00:34.868730 |
2026-03-05 00:00:34.868823 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-05 00:00:34.904438 | orchestrator | skipping: Conditional result was False
2026-03-05 00:00:34.911098 |
2026-03-05 00:00:34.911187 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-05 00:00:35.936073 | orchestrator | changed
2026-03-05 00:00:35.947151 |
2026-03-05 00:00:35.947261 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-05 00:00:36.228491 | orchestrator | ok
2026-03-05 00:00:36.235956 |
2026-03-05 00:00:36.236045 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-05 00:00:36.720607 | orchestrator | ok
2026-03-05 00:00:36.731554 |
2026-03-05 00:00:36.731661 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-05 00:00:37.135602 | orchestrator | ok
2026-03-05 00:00:37.140529 |
2026-03-05 00:00:37.140606 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-05 00:00:37.190347 | orchestrator | skipping: Conditional result was False
2026-03-05 00:00:37.196035 |
2026-03-05 00:00:37.196123 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-05 00:00:38.490967 | orchestrator -> localhost | changed
2026-03-05 00:00:38.512089 |
2026-03-05 00:00:38.512194 | TASK [add-build-sshkey : Add back temp key]
2026-03-05 00:00:39.478692 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/8dd41a9c45fa457fb0856736771f2ffb/work/8dd41a9c45fa457fb0856736771f2ffb_id_rsa (zuul-build-sshkey)
2026-03-05 00:00:39.478907 | orchestrator -> localhost | ok: Runtime: 0:00:00.023703
2026-03-05 00:00:39.489965 |
2026-03-05 00:00:39.490203 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-05 00:00:40.242975 | orchestrator | ok
2026-03-05 00:00:40.247805 |
2026-03-05 00:00:40.247888 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-05 00:00:40.305182 | orchestrator | skipping: Conditional result was False
2026-03-05 00:00:40.412799 |
2026-03-05 00:00:40.412901 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-05 00:00:40.929765 | orchestrator | ok
2026-03-05 00:00:40.975066 |
2026-03-05 00:00:40.975195 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-05 00:00:41.024833 | orchestrator | ok
2026-03-05 00:00:41.032667 |
2026-03-05 00:00:41.032752 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-05 00:00:41.776346 | orchestrator -> localhost | ok
2026-03-05 00:00:41.782547 |
2026-03-05 00:00:41.782663 | TASK [validate-host : Collect information about the host]
2026-03-05 00:00:43.659226 | orchestrator | ok
2026-03-05 00:00:43.685485 |
2026-03-05 00:00:43.685607 | TASK [validate-host : Sanitize hostname]
2026-03-05 00:00:43.792906 | orchestrator | ok
2026-03-05 00:00:43.797283 |
2026-03-05 00:00:43.797368 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-05 00:00:44.963740 | orchestrator -> localhost | changed
2026-03-05 00:00:44.968791 |
2026-03-05 00:00:44.968875 | TASK [validate-host : Collect information about zuul worker]
2026-03-05 00:00:45.747902 | orchestrator | ok
2026-03-05 00:00:45.752244 |
2026-03-05 00:00:45.752342 | TASK [validate-host : Write out all zuul information for each host]
2026-03-05 00:00:46.894261 | orchestrator -> localhost | changed
2026-03-05 00:00:46.904202 |
2026-03-05 00:00:46.904938 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-05 00:00:47.243775 | orchestrator | ok
2026-03-05 00:00:47.249724 |
2026-03-05 00:00:47.249815 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-05 00:01:59.040653 | orchestrator | changed:
2026-03-05 00:01:59.042246 | orchestrator | .d..t...... src/
2026-03-05 00:01:59.042309 | orchestrator | .d..t...... src/github.com/
2026-03-05 00:01:59.042336 | orchestrator | .d..t...... src/github.com/osism/
2026-03-05 00:01:59.042358 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-05 00:01:59.042380 | orchestrator | RedHat.yml
2026-03-05 00:01:59.057418 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-05 00:01:59.057436 | orchestrator | RedHat.yml
2026-03-05 00:01:59.057488 | orchestrator | = 1.53.0"...
2026-03-05 00:02:11.373998 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-05 00:02:11.396131 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-05 00:02:11.560879 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-05 00:02:12.025183 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-05 00:02:12.101472 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-05 00:02:12.805225 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-05 00:02:12.881491 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-05 00:02:13.413847 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-05 00:02:13.413921 | orchestrator |
2026-03-05 00:02:13.413928 | orchestrator | Providers are signed by their developers.
2026-03-05 00:02:13.413934 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-05 00:02:13.413962 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-05 00:02:13.413997 | orchestrator |
2026-03-05 00:02:13.414002 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-05 00:02:13.414007 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-05 00:02:13.414036 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-05 00:02:13.414049 | orchestrator | you run "tofu init" in the future.
2026-03-05 00:02:13.414429 | orchestrator |
2026-03-05 00:02:13.414469 | orchestrator | OpenTofu has been successfully initialized!
2026-03-05 00:02:13.414492 | orchestrator |
2026-03-05 00:02:13.414497 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-05 00:02:13.414501 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-05 00:02:13.414505 | orchestrator | should now work.
2026-03-05 00:02:13.414509 | orchestrator |
2026-03-05 00:02:13.414513 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-05 00:02:13.414517 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-05 00:02:13.414528 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-05 00:02:13.567220 | orchestrator | Created and switched to workspace "ci"!
2026-03-05 00:02:13.567275 | orchestrator |
2026-03-05 00:02:13.567282 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-05 00:02:13.567288 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-05 00:02:13.567292 | orchestrator | for this configuration.
2026-03-05 00:02:13.680623 | orchestrator | ci.auto.tfvars
2026-03-05 00:02:13.706077 | orchestrator | default_custom.tf
2026-03-05 00:02:14.635190 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-05 00:02:15.143400 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-05 00:02:15.372086 | orchestrator |
2026-03-05 00:02:15.372162 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-05 00:02:15.372174 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-05 00:02:15.372183 | orchestrator | + create
2026-03-05 00:02:15.372191 | orchestrator | <= read (data resources)
2026-03-05 00:02:15.372199 | orchestrator |
2026-03-05 00:02:15.372206 | orchestrator | OpenTofu will perform the following actions:
2026-03-05 00:02:15.372220 | orchestrator |
2026-03-05 00:02:15.372228 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-05 00:02:15.372235 | orchestrator | # (config refers to values not yet known)
2026-03-05 00:02:15.372242 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-05 00:02:15.372250 | orchestrator | + checksum = (known after apply)
2026-03-05 00:02:15.372257 | orchestrator | + created_at = (known after apply)
2026-03-05 00:02:15.372265 | orchestrator | + file = (known after apply)
2026-03-05 00:02:15.372272 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.372299 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.372306 | orchestrator | + min_disk_gb = (known after apply)
2026-03-05 00:02:15.372314 | orchestrator | + min_ram_mb = (known after apply)
2026-03-05 00:02:15.372320 | orchestrator | + most_recent = true
2026-03-05 00:02:15.372327 | orchestrator | + name = (known after apply)
2026-03-05 00:02:15.372334 | orchestrator | + protected = (known after apply)
2026-03-05 00:02:15.372341 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.372351 | orchestrator | + schema = (known after apply)
2026-03-05 00:02:15.372358 | orchestrator | + size_bytes = (known after apply)
2026-03-05 00:02:15.372365 | orchestrator | + tags = (known after apply)
2026-03-05 00:02:15.372372 | orchestrator | + updated_at = (known after apply)
2026-03-05 00:02:15.372379 | orchestrator | }
2026-03-05 00:02:15.372389 | orchestrator |
2026-03-05 00:02:15.372397 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-05 00:02:15.372403 | orchestrator | # (config refers to values not yet known)
2026-03-05 00:02:15.372410 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-05 00:02:15.372417 | orchestrator | + checksum = (known after apply)
2026-03-05 00:02:15.372423 | orchestrator | + created_at = (known after apply)
2026-03-05 00:02:15.372430 | orchestrator | + file = (known after apply)
2026-03-05 00:02:15.372437 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.372443 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.372450 | orchestrator | + min_disk_gb = (known after apply)
2026-03-05 00:02:15.372457 | orchestrator | + min_ram_mb = (known after apply)
2026-03-05 00:02:15.372464 | orchestrator | + most_recent = true
2026-03-05 00:02:15.372471 | orchestrator | + name = (known after apply)
2026-03-05 00:02:15.372477 | orchestrator | + protected = (known after apply)
2026-03-05 00:02:15.372484 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.372490 | orchestrator | + schema = (known after apply)
2026-03-05 00:02:15.372497 | orchestrator | + size_bytes = (known after apply)
2026-03-05 00:02:15.372504 | orchestrator | + tags = (known after apply)
2026-03-05 00:02:15.372510 | orchestrator | + updated_at = (known after apply)
2026-03-05 00:02:15.372517 | orchestrator | }
2026-03-05 00:02:15.372524 | orchestrator |
2026-03-05 00:02:15.372531 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-05 00:02:15.372538 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-05 00:02:15.372545 | orchestrator | + content = (known after apply)
2026-03-05 00:02:15.372552 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-05 00:02:15.372559 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-05 00:02:15.372566 | orchestrator | + content_md5 = (known after apply)
2026-03-05 00:02:15.372573 | orchestrator | + content_sha1 = (known after apply)
2026-03-05 00:02:15.372579 | orchestrator | + content_sha256 = (known after apply)
2026-03-05 00:02:15.372586 | orchestrator | + content_sha512 = (known after apply)
2026-03-05 00:02:15.372593 | orchestrator | + directory_permission = "0777"
2026-03-05 00:02:15.372600 | orchestrator | + file_permission = "0644"
2026-03-05 00:02:15.372606 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-05 00:02:15.372613 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.372620 | orchestrator | }
2026-03-05 00:02:15.372630 | orchestrator |
2026-03-05 00:02:15.372636 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-05 00:02:15.372643 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-05 00:02:15.372650 | orchestrator | + content = (known after apply)
2026-03-05 00:02:15.372657 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-05 00:02:15.372664 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-05 00:02:15.372671 | orchestrator | + content_md5 = (known after apply)
2026-03-05 00:02:15.372678 | orchestrator | + content_sha1 = (known after apply)
2026-03-05 00:02:15.372684 | orchestrator | + content_sha256 = (known after apply)
2026-03-05 00:02:15.372691 | orchestrator | + content_sha512 = (known after apply)
2026-03-05 00:02:15.372698 | orchestrator | + directory_permission = "0777"
2026-03-05 00:02:15.372705 | orchestrator | + file_permission = "0644"
2026-03-05 00:02:15.372718 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-05 00:02:15.372725 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.372731 | orchestrator | }
2026-03-05 00:02:15.372738 | orchestrator |
2026-03-05 00:02:15.372751 | orchestrator | # local_file.inventory will be created
2026-03-05 00:02:15.372759 | orchestrator | + resource "local_file" "inventory" {
2026-03-05 00:02:15.372765 | orchestrator | + content = (known after apply)
2026-03-05 00:02:15.372772 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-05 00:02:15.372779 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-05 00:02:15.372786 | orchestrator | + content_md5 = (known after apply)
2026-03-05 00:02:15.372792 | orchestrator | + content_sha1 = (known after apply)
2026-03-05 00:02:15.372799 | orchestrator | + content_sha256 = (known after apply)
2026-03-05 00:02:15.372806 | orchestrator | + content_sha512 = (known after apply)
2026-03-05 00:02:15.372813 | orchestrator | + directory_permission = "0777"
2026-03-05 00:02:15.372820 | orchestrator | + file_permission = "0644"
2026-03-05 00:02:15.372827 | orchestrator | + filename = "inventory.ci"
2026-03-05 00:02:15.372834 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.372840 | orchestrator | }
2026-03-05 00:02:15.372847 | orchestrator |
2026-03-05 00:02:15.372854 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-05 00:02:15.372861 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-05 00:02:15.372868 | orchestrator | + content = (sensitive value)
2026-03-05 00:02:15.372875 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-05 00:02:15.372882 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-05 00:02:15.372889 | orchestrator | + content_md5 = (known after apply)
2026-03-05 00:02:15.372896 | orchestrator | + content_sha1 = (known after apply)
2026-03-05 00:02:15.372903 | orchestrator | + content_sha256 = (known after apply)
2026-03-05 00:02:15.372909 | orchestrator | + content_sha512 = (known after apply)
2026-03-05 00:02:15.372916 | orchestrator | + directory_permission = "0700"
2026-03-05 00:02:15.372923 | orchestrator | + file_permission = "0600"
2026-03-05 00:02:15.372930 | orchestrator | + filename = ".id_rsa.ci"
2026-03-05 00:02:15.372936 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.372960 | orchestrator | }
2026-03-05 00:02:15.372967 | orchestrator |
2026-03-05 00:02:15.372974 | orchestrator | # null_resource.node_semaphore will be created
2026-03-05 00:02:15.372981 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-05 00:02:15.372988 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.372995 | orchestrator | }
2026-03-05 00:02:15.373005 | orchestrator |
2026-03-05 00:02:15.373012 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-05 00:02:15.373019 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-05 00:02:15.373026 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.373032 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.373039 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.373046 | orchestrator | + image_id = (known after apply)
2026-03-05 00:02:15.373052 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.373060 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-05 00:02:15.373067 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.373074 | orchestrator | + size = 80
2026-03-05 00:02:15.373081 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.373088 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.373095 | orchestrator | }
2026-03-05 00:02:15.373102 | orchestrator |
2026-03-05 00:02:15.373109 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-05 00:02:15.373115 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-05 00:02:15.373122 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.373129 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.373136 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.373147 | orchestrator | + image_id = (known after apply)
2026-03-05 00:02:15.373154 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.373161 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-05 00:02:15.373168 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.373174 | orchestrator | + size = 80
2026-03-05 00:02:15.373181 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.373187 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.373193 | orchestrator | }
2026-03-05 00:02:15.373199 | orchestrator |
2026-03-05 00:02:15.373205 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-05 00:02:15.373212 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-05 00:02:15.373217 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.373223 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.373228 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.373234 | orchestrator | + image_id = (known after apply)
2026-03-05 00:02:15.373239 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.373244 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-05 00:02:15.373250 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.373255 | orchestrator | + size = 80
2026-03-05 00:02:15.373260 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.373267 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.373272 | orchestrator | }
2026-03-05 00:02:15.373277 | orchestrator |
2026-03-05 00:02:15.373283 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-05 00:02:15.373288 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-05 00:02:15.373294 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.373299 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.373305 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.373310 | orchestrator | + image_id = (known after apply)
2026-03-05 00:02:15.373316 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.373321 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-05 00:02:15.373327 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.373332 | orchestrator | + size = 80
2026-03-05 00:02:15.373338 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.373343 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.373348 | orchestrator | }
2026-03-05 00:02:15.373355 | orchestrator |
2026-03-05 00:02:15.373361 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-05 00:02:15.373367 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-05 00:02:15.373374 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.373380 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.373387 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.373394 | orchestrator | + image_id = (known after apply)
2026-03-05 00:02:15.373400 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.373411 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-05 00:02:15.373418 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.373424 | orchestrator | + size = 80
2026-03-05 00:02:15.373431 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.373438 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.373443 | orchestrator | }
2026-03-05 00:02:15.373453 | orchestrator |
2026-03-05 00:02:15.373460 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-05 00:02:15.373467 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-05 00:02:15.373473 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.373480 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.373486 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.373498 | orchestrator | + image_id = (known after apply)
2026-03-05 00:02:15.373504 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.373510 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-05 00:02:15.373517 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.373523 | orchestrator | + size = 80
2026-03-05 00:02:15.373530 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.373536 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.373543 | orchestrator | }
2026-03-05 00:02:15.373550 | orchestrator |
2026-03-05 00:02:15.373557 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-05 00:02:15.373563 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-05 00:02:15.373570 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.373576 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.373584 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.373590 | orchestrator | + image_id = (known after apply)
2026-03-05 00:02:15.373596 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.373603 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-05 00:02:15.373610 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.373616 | orchestrator | + size = 80
2026-03-05 00:02:15.373623 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.373629 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.373636 | orchestrator | }
2026-03-05 00:02:15.373643 | orchestrator |
2026-03-05 00:02:15.373649 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-05 00:02:15.373656 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:15.373663 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.373669 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.373675 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.373682 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.373688 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-05 00:02:15.373694 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.373701 | orchestrator | + size = 20
2026-03-05 00:02:15.373707 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.373714 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.373721 | orchestrator | }
2026-03-05 00:02:15.373727 | orchestrator |
2026-03-05 00:02:15.373734 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-05 00:02:15.373741 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:15.373746 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.373753 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.373760 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.373767 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.373774 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-05 00:02:15.373780 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.373787 | orchestrator | + size = 20
2026-03-05 00:02:15.373793 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.373800 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.373806 | orchestrator | }
2026-03-05 00:02:15.373813 | orchestrator |
2026-03-05 00:02:15.373819 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-05 00:02:15.373826 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:15.373832 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.373839 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.373846 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.373852 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.373859 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-05 00:02:15.373865 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.373876 | orchestrator | + size = 20
2026-03-05 00:02:15.373882 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.373889 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.373895 | orchestrator | }
2026-03-05 00:02:15.373902 | orchestrator |
2026-03-05 00:02:15.373909 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-05 00:02:15.373916 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:15.373922 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.373929 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.373935 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.373983 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.373991 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-05 00:02:15.373996 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.374003 | orchestrator | + size = 20
2026-03-05 00:02:15.374011 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.374051 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.374059 | orchestrator | }
2026-03-05 00:02:15.374067 | orchestrator |
2026-03-05 00:02:15.374074 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-05 00:02:15.374081 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:15.374088 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.374095 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.374102 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.374109 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.374116 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-05 00:02:15.374124 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.374135 | orchestrator | + size = 20
2026-03-05 00:02:15.374143 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.374150 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.374157 | orchestrator | }
2026-03-05 00:02:15.374168 | orchestrator |
2026-03-05 00:02:15.374175 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-05 00:02:15.374181 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:15.374188 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.374194 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.374201 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.374208 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.374214 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-05 00:02:15.374221 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.374229 | orchestrator | + size = 20
2026-03-05 00:02:15.374236 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.374243 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.374251 | orchestrator | }
2026-03-05 00:02:15.374258 | orchestrator |
2026-03-05 00:02:15.374266 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-05 00:02:15.374273 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:15.374280 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.374288 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.374295 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.374302 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.374309 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-05 00:02:15.374357 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.374367 | orchestrator | + size = 20
2026-03-05 00:02:15.374374 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.374380 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.374387 | orchestrator | }
2026-03-05 00:02:15.374394 | orchestrator |
2026-03-05 00:02:15.374401 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-05 00:02:15.374408 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-05 00:02:15.374421 | orchestrator | + attachment = (known after apply)
2026-03-05 00:02:15.374428 | orchestrator | + availability_zone = "nova"
2026-03-05 00:02:15.374435 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.374442 | orchestrator | + metadata = (known after apply)
2026-03-05 00:02:15.374448 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-05 00:02:15.374455 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.374463 | orchestrator | + size = 20
2026-03-05 00:02:15.374470 | orchestrator | + volume_retype_policy = "never"
2026-03-05 00:02:15.374476 | orchestrator | + volume_type = "ssd"
2026-03-05 00:02:15.374483 | orchestrator | }
2026-03-05 00:02:15.374490 | orchestrator |
2026-03-05 00:02:15.374497 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-05 00:02:15.374504 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-05 00:02:15.374512 | orchestrator | + attachment = (known after apply) 2026-03-05 00:02:15.374518 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:15.374525 | orchestrator | + id = (known after apply) 2026-03-05 00:02:15.374532 | orchestrator | + metadata = (known after apply) 2026-03-05 00:02:15.374539 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-05 00:02:15.374546 | orchestrator | + region = (known after apply) 2026-03-05 00:02:15.374552 | orchestrator | + size = 20 2026-03-05 00:02:15.374559 | orchestrator | + volume_retype_policy = "never" 2026-03-05 00:02:15.374567 | orchestrator | + volume_type = "ssd" 2026-03-05 00:02:15.374574 | orchestrator | } 2026-03-05 00:02:15.374580 | orchestrator | 2026-03-05 00:02:15.374588 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-05 00:02:15.374594 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-05 00:02:15.374602 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:15.374608 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:15.374615 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:15.374622 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:15.374629 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:15.374636 | orchestrator | + config_drive = true 2026-03-05 00:02:15.374642 | orchestrator | + created = (known after apply) 2026-03-05 00:02:15.374649 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:15.374656 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-05 00:02:15.374663 | orchestrator | + force_delete = false 2026-03-05 00:02:15.374670 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:15.374677 | 
orchestrator | + id = (known after apply) 2026-03-05 00:02:15.374684 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:15.374691 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:15.374698 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:15.374705 | orchestrator | + name = "testbed-manager" 2026-03-05 00:02:15.374712 | orchestrator | + power_state = "active" 2026-03-05 00:02:15.374719 | orchestrator | + region = (known after apply) 2026-03-05 00:02:15.374726 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:15.374733 | orchestrator | + stop_before_destroy = false 2026-03-05 00:02:15.374740 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:15.374747 | orchestrator | + user_data = (sensitive value) 2026-03-05 00:02:15.374754 | orchestrator | 2026-03-05 00:02:15.374761 | orchestrator | + block_device { 2026-03-05 00:02:15.374768 | orchestrator | + boot_index = 0 2026-03-05 00:02:15.374775 | orchestrator | + delete_on_termination = false 2026-03-05 00:02:15.374786 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:15.374794 | orchestrator | + multiattach = false 2026-03-05 00:02:15.374800 | orchestrator | + source_type = "volume" 2026-03-05 00:02:15.374807 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:15.374823 | orchestrator | } 2026-03-05 00:02:15.374830 | orchestrator | 2026-03-05 00:02:15.374837 | orchestrator | + network { 2026-03-05 00:02:15.374844 | orchestrator | + access_network = false 2026-03-05 00:02:15.374851 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:15.374858 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:15.374865 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:15.374872 | orchestrator | + name = (known after apply) 2026-03-05 00:02:15.374879 | orchestrator | + port = (known after apply) 2026-03-05 00:02:15.374886 | orchestrator | + uuid = (known after apply) 2026-03-05 
00:02:15.374893 | orchestrator | } 2026-03-05 00:02:15.374899 | orchestrator | } 2026-03-05 00:02:15.374911 | orchestrator | 2026-03-05 00:02:15.374918 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-05 00:02:15.374925 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-05 00:02:15.374932 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:15.374953 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:15.374961 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:15.374967 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:15.374973 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:15.374980 | orchestrator | + config_drive = true 2026-03-05 00:02:15.374987 | orchestrator | + created = (known after apply) 2026-03-05 00:02:15.374994 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:15.375001 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-05 00:02:15.375007 | orchestrator | + force_delete = false 2026-03-05 00:02:15.375014 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:15.375021 | orchestrator | + id = (known after apply) 2026-03-05 00:02:15.375028 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:15.375035 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:15.375042 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:15.375049 | orchestrator | + name = "testbed-node-0" 2026-03-05 00:02:15.375056 | orchestrator | + power_state = "active" 2026-03-05 00:02:15.375063 | orchestrator | + region = (known after apply) 2026-03-05 00:02:15.375070 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:15.375076 | orchestrator | + stop_before_destroy = false 2026-03-05 00:02:15.375083 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:15.375090 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-05 00:02:15.375097 | orchestrator | 2026-03-05 00:02:15.375104 | orchestrator | + block_device { 2026-03-05 00:02:15.375111 | orchestrator | + boot_index = 0 2026-03-05 00:02:15.375118 | orchestrator | + delete_on_termination = false 2026-03-05 00:02:15.375125 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:15.375132 | orchestrator | + multiattach = false 2026-03-05 00:02:15.375138 | orchestrator | + source_type = "volume" 2026-03-05 00:02:15.375146 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:15.375153 | orchestrator | } 2026-03-05 00:02:15.375160 | orchestrator | 2026-03-05 00:02:15.375167 | orchestrator | + network { 2026-03-05 00:02:15.375174 | orchestrator | + access_network = false 2026-03-05 00:02:15.375181 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:15.375188 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:15.375195 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:15.375201 | orchestrator | + name = (known after apply) 2026-03-05 00:02:15.375208 | orchestrator | + port = (known after apply) 2026-03-05 00:02:15.375215 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:15.375222 | orchestrator | } 2026-03-05 00:02:15.375229 | orchestrator | } 2026-03-05 00:02:15.375236 | orchestrator | 2026-03-05 00:02:15.375243 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-05 00:02:15.375249 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-05 00:02:15.375256 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:15.375267 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:15.375275 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:15.375281 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:15.375288 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:15.375295 
| orchestrator | + config_drive = true 2026-03-05 00:02:15.375302 | orchestrator | + created = (known after apply) 2026-03-05 00:02:15.375309 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:15.375316 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-05 00:02:15.375323 | orchestrator | + force_delete = false 2026-03-05 00:02:15.375329 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:15.375336 | orchestrator | + id = (known after apply) 2026-03-05 00:02:15.375343 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:15.375350 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:15.375357 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:15.375364 | orchestrator | + name = "testbed-node-1" 2026-03-05 00:02:15.375371 | orchestrator | + power_state = "active" 2026-03-05 00:02:15.375377 | orchestrator | + region = (known after apply) 2026-03-05 00:02:15.375384 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:15.375391 | orchestrator | + stop_before_destroy = false 2026-03-05 00:02:15.375397 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:15.375404 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-05 00:02:15.375411 | orchestrator | 2026-03-05 00:02:15.375418 | orchestrator | + block_device { 2026-03-05 00:02:15.375425 | orchestrator | + boot_index = 0 2026-03-05 00:02:15.375432 | orchestrator | + delete_on_termination = false 2026-03-05 00:02:15.375439 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:15.375445 | orchestrator | + multiattach = false 2026-03-05 00:02:15.375452 | orchestrator | + source_type = "volume" 2026-03-05 00:02:15.375459 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:15.375466 | orchestrator | } 2026-03-05 00:02:15.375472 | orchestrator | 2026-03-05 00:02:15.375479 | orchestrator | + network { 2026-03-05 00:02:15.375486 | orchestrator | + access_network = 
false 2026-03-05 00:02:15.375493 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:15.375500 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:15.375507 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:15.375514 | orchestrator | + name = (known after apply) 2026-03-05 00:02:15.375521 | orchestrator | + port = (known after apply) 2026-03-05 00:02:15.375528 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:15.375535 | orchestrator | } 2026-03-05 00:02:15.375541 | orchestrator | } 2026-03-05 00:02:15.375548 | orchestrator | 2026-03-05 00:02:15.375555 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-05 00:02:15.375562 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-05 00:02:15.375569 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:15.375576 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:15.375584 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:15.375591 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:15.375601 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:15.375608 | orchestrator | + config_drive = true 2026-03-05 00:02:15.375620 | orchestrator | + created = (known after apply) 2026-03-05 00:02:15.375628 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:15.375635 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-05 00:02:15.375641 | orchestrator | + force_delete = false 2026-03-05 00:02:15.375649 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:15.375655 | orchestrator | + id = (known after apply) 2026-03-05 00:02:15.375662 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:15.375674 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:15.375680 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:15.375687 | orchestrator | + name = 
"testbed-node-2" 2026-03-05 00:02:15.375694 | orchestrator | + power_state = "active" 2026-03-05 00:02:15.375701 | orchestrator | + region = (known after apply) 2026-03-05 00:02:15.375708 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:15.375714 | orchestrator | + stop_before_destroy = false 2026-03-05 00:02:15.375721 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:15.375728 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-05 00:02:15.375734 | orchestrator | 2026-03-05 00:02:15.375741 | orchestrator | + block_device { 2026-03-05 00:02:15.375748 | orchestrator | + boot_index = 0 2026-03-05 00:02:15.375755 | orchestrator | + delete_on_termination = false 2026-03-05 00:02:15.375761 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:15.375768 | orchestrator | + multiattach = false 2026-03-05 00:02:15.375775 | orchestrator | + source_type = "volume" 2026-03-05 00:02:15.375782 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:15.375788 | orchestrator | } 2026-03-05 00:02:15.375795 | orchestrator | 2026-03-05 00:02:15.375802 | orchestrator | + network { 2026-03-05 00:02:15.375808 | orchestrator | + access_network = false 2026-03-05 00:02:15.375815 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:15.375822 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:15.375829 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:15.375836 | orchestrator | + name = (known after apply) 2026-03-05 00:02:15.375843 | orchestrator | + port = (known after apply) 2026-03-05 00:02:15.375850 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:15.375857 | orchestrator | } 2026-03-05 00:02:15.375864 | orchestrator | } 2026-03-05 00:02:15.375871 | orchestrator | 2026-03-05 00:02:15.375878 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-05 00:02:15.375885 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-05 00:02:15.375892 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:15.375899 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:15.375906 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:15.375912 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:15.375919 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:15.375926 | orchestrator | + config_drive = true 2026-03-05 00:02:15.375933 | orchestrator | + created = (known after apply) 2026-03-05 00:02:15.375952 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:15.375960 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-05 00:02:15.375967 | orchestrator | + force_delete = false 2026-03-05 00:02:15.375974 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:15.375980 | orchestrator | + id = (known after apply) 2026-03-05 00:02:15.375987 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:15.375994 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:15.376001 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:15.376008 | orchestrator | + name = "testbed-node-3" 2026-03-05 00:02:15.376015 | orchestrator | + power_state = "active" 2026-03-05 00:02:15.376021 | orchestrator | + region = (known after apply) 2026-03-05 00:02:15.376028 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:15.376035 | orchestrator | + stop_before_destroy = false 2026-03-05 00:02:15.376042 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:15.376049 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-05 00:02:15.376056 | orchestrator | 2026-03-05 00:02:15.376063 | orchestrator | + block_device { 2026-03-05 00:02:15.376073 | orchestrator | + boot_index = 0 2026-03-05 00:02:15.376080 | orchestrator | + delete_on_termination = false 2026-03-05 
00:02:15.376087 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:15.376098 | orchestrator | + multiattach = false 2026-03-05 00:02:15.376105 | orchestrator | + source_type = "volume" 2026-03-05 00:02:15.376112 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:15.376119 | orchestrator | } 2026-03-05 00:02:15.376126 | orchestrator | 2026-03-05 00:02:15.376132 | orchestrator | + network { 2026-03-05 00:02:15.376139 | orchestrator | + access_network = false 2026-03-05 00:02:15.376146 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:15.376153 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:15.376159 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:15.376166 | orchestrator | + name = (known after apply) 2026-03-05 00:02:15.376173 | orchestrator | + port = (known after apply) 2026-03-05 00:02:15.376180 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:15.376186 | orchestrator | } 2026-03-05 00:02:15.376192 | orchestrator | } 2026-03-05 00:02:15.376198 | orchestrator | 2026-03-05 00:02:15.376204 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-05 00:02:15.376210 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-05 00:02:15.376216 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:15.376221 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:15.376226 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:15.376231 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:15.376237 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:15.376243 | orchestrator | + config_drive = true 2026-03-05 00:02:15.376249 | orchestrator | + created = (known after apply) 2026-03-05 00:02:15.376254 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:15.376260 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-05 00:02:15.376265 | 
orchestrator | + force_delete = false 2026-03-05 00:02:15.376271 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:15.376277 | orchestrator | + id = (known after apply) 2026-03-05 00:02:15.376283 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:15.376289 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:15.376295 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:15.376301 | orchestrator | + name = "testbed-node-4" 2026-03-05 00:02:15.376313 | orchestrator | + power_state = "active" 2026-03-05 00:02:15.376317 | orchestrator | + region = (known after apply) 2026-03-05 00:02:15.376321 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:15.376325 | orchestrator | + stop_before_destroy = false 2026-03-05 00:02:15.376328 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:15.376332 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-05 00:02:15.376336 | orchestrator | 2026-03-05 00:02:15.376340 | orchestrator | + block_device { 2026-03-05 00:02:15.376344 | orchestrator | + boot_index = 0 2026-03-05 00:02:15.376348 | orchestrator | + delete_on_termination = false 2026-03-05 00:02:15.376351 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:15.376355 | orchestrator | + multiattach = false 2026-03-05 00:02:15.376359 | orchestrator | + source_type = "volume" 2026-03-05 00:02:15.376362 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:15.376366 | orchestrator | } 2026-03-05 00:02:15.376370 | orchestrator | 2026-03-05 00:02:15.376374 | orchestrator | + network { 2026-03-05 00:02:15.376378 | orchestrator | + access_network = false 2026-03-05 00:02:15.376381 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:15.376385 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:15.376389 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:15.376393 | orchestrator | + name = (known 
after apply) 2026-03-05 00:02:15.376396 | orchestrator | + port = (known after apply) 2026-03-05 00:02:15.376400 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:15.376404 | orchestrator | } 2026-03-05 00:02:15.376408 | orchestrator | } 2026-03-05 00:02:15.376416 | orchestrator | 2026-03-05 00:02:15.376420 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-05 00:02:15.376424 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-05 00:02:15.376428 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-05 00:02:15.376431 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-05 00:02:15.376435 | orchestrator | + all_metadata = (known after apply) 2026-03-05 00:02:15.376439 | orchestrator | + all_tags = (known after apply) 2026-03-05 00:02:15.376443 | orchestrator | + availability_zone = "nova" 2026-03-05 00:02:15.376446 | orchestrator | + config_drive = true 2026-03-05 00:02:15.376450 | orchestrator | + created = (known after apply) 2026-03-05 00:02:15.376454 | orchestrator | + flavor_id = (known after apply) 2026-03-05 00:02:15.376458 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-05 00:02:15.376462 | orchestrator | + force_delete = false 2026-03-05 00:02:15.376468 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-05 00:02:15.376472 | orchestrator | + id = (known after apply) 2026-03-05 00:02:15.376476 | orchestrator | + image_id = (known after apply) 2026-03-05 00:02:15.376480 | orchestrator | + image_name = (known after apply) 2026-03-05 00:02:15.376483 | orchestrator | + key_pair = "testbed" 2026-03-05 00:02:15.376487 | orchestrator | + name = "testbed-node-5" 2026-03-05 00:02:15.376491 | orchestrator | + power_state = "active" 2026-03-05 00:02:15.376495 | orchestrator | + region = (known after apply) 2026-03-05 00:02:15.376498 | orchestrator | + security_groups = (known after apply) 2026-03-05 00:02:15.376502 | orchestrator | + 
stop_before_destroy = false 2026-03-05 00:02:15.376506 | orchestrator | + updated = (known after apply) 2026-03-05 00:02:15.376510 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-05 00:02:15.376513 | orchestrator | 2026-03-05 00:02:15.376517 | orchestrator | + block_device { 2026-03-05 00:02:15.376521 | orchestrator | + boot_index = 0 2026-03-05 00:02:15.376525 | orchestrator | + delete_on_termination = false 2026-03-05 00:02:15.376529 | orchestrator | + destination_type = "volume" 2026-03-05 00:02:15.376532 | orchestrator | + multiattach = false 2026-03-05 00:02:15.376536 | orchestrator | + source_type = "volume" 2026-03-05 00:02:15.376540 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:15.376544 | orchestrator | } 2026-03-05 00:02:15.376548 | orchestrator | 2026-03-05 00:02:15.376552 | orchestrator | + network { 2026-03-05 00:02:15.376555 | orchestrator | + access_network = false 2026-03-05 00:02:15.376559 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-05 00:02:15.376563 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-05 00:02:15.376567 | orchestrator | + mac = (known after apply) 2026-03-05 00:02:15.376571 | orchestrator | + name = (known after apply) 2026-03-05 00:02:15.376575 | orchestrator | + port = (known after apply) 2026-03-05 00:02:15.376578 | orchestrator | + uuid = (known after apply) 2026-03-05 00:02:15.376582 | orchestrator | } 2026-03-05 00:02:15.376586 | orchestrator | } 2026-03-05 00:02:15.376590 | orchestrator | 2026-03-05 00:02:15.376594 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-05 00:02:15.376597 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-05 00:02:15.376601 | orchestrator | + fingerprint = (known after apply) 2026-03-05 00:02:15.376605 | orchestrator | + id = (known after apply) 2026-03-05 00:02:15.376609 | orchestrator | + name = "testbed" 2026-03-05 00:02:15.376613 | orchestrator | + private_key = 
(sensitive value) 2026-03-05 00:02:15.376616 | orchestrator | + public_key = (known after apply) 2026-03-05 00:02:15.376620 | orchestrator | + region = (known after apply) 2026-03-05 00:02:15.376624 | orchestrator | + user_id = (known after apply) 2026-03-05 00:02:15.376628 | orchestrator | } 2026-03-05 00:02:15.376632 | orchestrator | 2026-03-05 00:02:15.376636 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-05 00:02:15.376639 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-05 00:02:15.376646 | orchestrator | + device = (known after apply) 2026-03-05 00:02:15.376650 | orchestrator | + id = (known after apply) 2026-03-05 00:02:15.376654 | orchestrator | + instance_id = (known after apply) 2026-03-05 00:02:15.376657 | orchestrator | + region = (known after apply) 2026-03-05 00:02:15.376661 | orchestrator | + volume_id = (known after apply) 2026-03-05 00:02:15.376665 | orchestrator | } 2026-03-05 00:02:15.376668 | orchestrator | 2026-03-05 00:02:15.376672 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-05 00:02:15.376676 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-05 00:02:15.376680 | orchestrator | + device = (known after apply) 2026-03-05 00:02:15.376684 | orchestrator | + id = (known after apply) 2026-03-05 00:02:15.376688 | orchestrator | + instance_id = (known after apply) 2026-03-05 00:02:15.376691 | orchestrator | + region = (known after apply) 2026-03-05 00:02:15.376695 | orchestrator | + volume_id = (known after apply) 2026-03-05 00:02:15.376699 | orchestrator | } 2026-03-05 00:02:15.376703 | orchestrator | 2026-03-05 00:02:15.376706 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-05 00:02:15.376710 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-03-05 00:02:15.376717 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-05 00:02:15.379047 | orchestrator | + network_id = (known after apply)
2026-03-05 00:02:15.379051 | orchestrator | + no_gateway = false
2026-03-05 00:02:15.379054 | orchestrator | + region = (known after apply)
2026-03-05 00:02:15.379058 | orchestrator | + service_types = (known after apply)
2026-03-05 00:02:15.379065 | orchestrator | + tenant_id = (known after apply)
2026-03-05 00:02:15.379069 | orchestrator |
2026-03-05 00:02:15.379072 | orchestrator | + allocation_pool {
2026-03-05 00:02:15.379076 | orchestrator | + end = "192.168.31.250"
2026-03-05 00:02:15.379080 | orchestrator | + start = "192.168.31.200"
2026-03-05 00:02:15.379084 | orchestrator | }
2026-03-05 00:02:15.379087 | orchestrator | }
2026-03-05 00:02:15.379093 | orchestrator |
2026-03-05 00:02:15.379097 | orchestrator | # terraform_data.image will be created
2026-03-05 00:02:15.379100 | orchestrator | + resource "terraform_data" "image" {
2026-03-05 00:02:15.379104 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.379108 | orchestrator | + input = "Ubuntu 24.04"
2026-03-05 00:02:15.379112 | orchestrator | + output = (known after apply)
2026-03-05 00:02:15.379115 | orchestrator | }
2026-03-05 00:02:15.379119 | orchestrator |
2026-03-05 00:02:15.379123 | orchestrator | # terraform_data.image_node will be created
2026-03-05 00:02:15.379127 | orchestrator | + resource "terraform_data" "image_node" {
2026-03-05 00:02:15.379130 | orchestrator | + id = (known after apply)
2026-03-05 00:02:15.379134 | orchestrator | + input = "Ubuntu 24.04"
2026-03-05 00:02:15.379138 | orchestrator | + output = (known after apply)
2026-03-05 00:02:15.379142 | orchestrator | }
2026-03-05 00:02:15.379145 | orchestrator |
2026-03-05 00:02:15.379149 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-03-05 00:02:15.379153 | orchestrator |
2026-03-05 00:02:15.379157 | orchestrator | Changes to Outputs:
2026-03-05 00:02:15.379160 | orchestrator | + manager_address = (sensitive value)
2026-03-05 00:02:15.379164 | orchestrator | + private_key = (sensitive value)
2026-03-05 00:02:15.625551 | orchestrator | terraform_data.image: Creating...
2026-03-05 00:02:15.626261 | orchestrator | terraform_data.image_node: Creating...
2026-03-05 00:02:15.626595 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=ea57c6a9-2afb-e23a-67bb-62ec0692b212]
2026-03-05 00:02:15.626825 | orchestrator | terraform_data.image: Creation complete after 0s [id=e23a0950-9c74-9114-5a10-982054b08ab4]
2026-03-05 00:02:15.638288 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-05 00:02:15.638715 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-05 00:02:15.643202 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-05 00:02:15.667786 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-05 00:02:15.698226 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-05 00:02:15.698284 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-05 00:02:15.698290 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-05 00:02:15.698294 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-05 00:02:15.698299 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-05 00:02:15.698303 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-05 00:02:16.163444 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-05 00:02:16.168357 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-05 00:02:16.180902 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-05 00:02:16.185074 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-05 00:02:16.195881 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-05 00:02:16.205666 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-05 00:02:17.042172 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=c8e62470-7b32-4789-a790-5dfb9faf4747]
2026-03-05 00:02:17.061645 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-05 00:02:17.070515 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=c6cab6068d724c9a1958ca20f4fee1c64d9644b3]
2026-03-05 00:02:17.081128 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-05 00:02:17.086443 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=5558424b636f8f35d4b0a363e8c62ce50ce97a16]
2026-03-05 00:02:17.091408 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-05 00:02:19.300571 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=c272dc3f-f5b6-4857-91f2-561a599f15b5]
2026-03-05 00:02:19.315506 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=bc7e009b-77b4-429d-819f-0751386ded0b]
2026-03-05 00:02:19.319635 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-05 00:02:19.323564 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-05 00:02:19.333991 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=177e9830-d762-48d2-8720-88dd872b3a27]
2026-03-05 00:02:19.334924 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=1cde8d38-c9d3-4512-8106-c139834ff42b]
2026-03-05 00:02:19.344637 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-05 00:02:19.345418 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-05 00:02:19.370425 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=886d7f4d-c342-4547-93ea-f5198c18b4a1]
2026-03-05 00:02:19.371120 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=80e7620b-1c7d-40ff-852b-40246feca9c5]
2026-03-05 00:02:19.382065 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-05 00:02:19.382101 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-05 00:02:19.394657 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=9c8197fe-cfc6-470d-b43f-168fdfa4c980]
2026-03-05 00:02:19.404269 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-05 00:02:19.451387 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=e9fbedff-eb29-4e1b-a232-9476e4a5bada]
2026-03-05 00:02:19.460577 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4]
2026-03-05 00:02:20.442921 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=0df443fe-93f7-4210-ab94-8f2e48ea1f52]
2026-03-05 00:02:20.443277 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=2db1203a-2052-495d-93ef-9a5e6f3925a1]
2026-03-05 00:02:20.447355 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-05 00:02:22.709344 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=f3667f1e-5067-4036-b179-f7ed5b88883b]
2026-03-05 00:02:22.733406 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=7818bcf6-78f1-48ba-b92b-b536ad3835fb]
2026-03-05 00:02:22.784691 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=40dcebc6-2ad4-440f-87a2-f05db4a8eb90]
2026-03-05 00:02:22.803802 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=cf23377a-f42e-406a-8eb8-34ba52ccfac6]
2026-03-05 00:02:22.826999 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=9920fd12-02dd-4b62-9dd4-bd789f1a1f90]
2026-03-05 00:02:22.855039 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=c0aa8b33-2596-46db-b782-3e102abbb8d9]
2026-03-05 00:02:23.688830 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=81c404b3-ef98-4dc5-8d17-0a01fed59b77]
2026-03-05 00:02:23.698250 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-05 00:02:23.700447 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-05 00:02:23.700964 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-05 00:02:23.919465 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=1bf6f39f-2287-4455-9713-fc516cc6a75f]
2026-03-05 00:02:23.939105 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=a694ab3b-e1cf-42f4-bc6e-24e898da3864]
2026-03-05 00:02:23.939170 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-05 00:02:23.939651 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-05 00:02:23.940208 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-05 00:02:23.945459 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-05 00:02:23.947736 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-05 00:02:23.954078 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-05 00:02:23.958796 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-05 00:02:23.968778 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-05 00:02:23.969394 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-05 00:02:24.141389 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=3f31bb81-bb60-4ac8-b3c8-a6f36bfa3cbd]
2026-03-05 00:02:24.146291 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-05 00:02:24.336492 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=b63a4537-68eb-45df-8ace-f9b47232f88d]
2026-03-05 00:02:24.340597 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-05 00:02:24.507007 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=1774e1dc-dfb0-4793-9da2-e74bf551fe86]
2026-03-05 00:02:24.512561 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-05 00:02:24.625014 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=5d217a7e-af13-466d-8c34-2bed03e788c8]
2026-03-05 00:02:24.627834 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-05 00:02:24.640329 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=6ea2ba19-2c10-4fce-af5b-95ecfb0d6af3]
2026-03-05 00:02:24.643364 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-05 00:02:24.713020 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=24776dfe-299a-4216-8019-b7721684b828]
2026-03-05 00:02:24.717044 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-05 00:02:24.942407 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=11b0f7a5-e691-4142-bcc9-97380c85a70f]
2026-03-05 00:02:24.946893 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-05 00:02:24.963426 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=70c219f0-78ba-4920-aff9-8d7eee0c9b0c]
2026-03-05 00:02:25.051655 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=2826f804-5621-4eb3-a5a5-1bcb4bf5dc32]
2026-03-05 00:02:25.067070 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=89a7141c-b905-4dd8-a80c-14af45984622]
2026-03-05 00:02:25.114932 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=42333f78-b866-43df-9524-dc0950647f0c]
2026-03-05 00:02:25.132880 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=810ab95c-49de-4526-8f0c-c1c92233a9e4]
2026-03-05 00:02:25.133297 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=2cf5e4be-3f7f-46a1-a899-68b9addc74f8]
2026-03-05 00:02:25.260398 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=db3b68e6-aecf-42fa-8bcc-e40cc5d67d8c]
2026-03-05 00:02:25.492505 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=66ec4856-06be-4f90-a8b4-d0aa6f05cb06]
2026-03-05 00:02:25.723192 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=3cd5081f-ca17-4bd6-ac18-56865760e100]
2026-03-05 00:02:27.075570 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=1acadaba-6300-42dc-a8e7-651c65af955a]
2026-03-05 00:02:27.102379 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-05 00:02:27.111309 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-05 00:02:27.125368 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-05 00:02:27.142919 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-05 00:02:27.153145 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-05 00:02:27.154999 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-05 00:02:27.167408 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-05 00:02:29.606728 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=9a7b3a6d-9946-42a6-87cd-e401015ec4cc]
2026-03-05 00:02:29.620214 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-05 00:02:29.620281 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-05 00:02:29.620593 | orchestrator | local_file.inventory: Creating...
2026-03-05 00:02:29.628496 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=119c54e2cba800acb7e10b89487fbaa6b66ae939]
2026-03-05 00:02:29.631893 | orchestrator | local_file.inventory: Creation complete after 0s [id=b7e4a3601c2b8870c447b2d29bc5cb800aed771b]
2026-03-05 00:02:30.417699 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=9a7b3a6d-9946-42a6-87cd-e401015ec4cc]
2026-03-05 00:02:37.111914 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-05 00:02:37.127153 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-05 00:02:37.144538 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-05 00:02:37.164679 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-05 00:02:37.166080 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-05 00:02:37.171481 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-05 00:02:47.112862 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-05 00:02:47.128343 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-05 00:02:47.145733 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-05 00:02:47.165377 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-05 00:02:47.168652 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-05 00:02:47.172129 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-05 00:02:47.914382 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=cd286217-9023-4576-820a-f265c4fe8ed4]
2026-03-05 00:02:57.118401 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-05 00:02:57.128922 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-05 00:02:57.146352 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-05 00:02:57.165849 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-05 00:02:57.173269 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-05 00:02:57.849531 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=8a5fb719-e912-43b0-b2ea-9a2f1e6b2e92]
2026-03-05 00:02:57.916412 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=fe8d300b-7181-4de2-85ff-7eff35f633e9]
2026-03-05 00:02:58.056164 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=387a9ae6-48e7-485a-bd6a-091324f5ad6f]
2026-03-05 00:02:58.457264 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=64425984-96ca-4559-835e-4fbe1f350ba7]
2026-03-05 00:03:07.127495 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-05 00:03:07.986876 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=0c48f3d3-3578-4105-bad4-0680d7a51f19]
2026-03-05 00:03:08.011190 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-05 00:03:08.014616 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=5139068718698111485]
2026-03-05 00:03:08.026098 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-05 00:03:08.026167 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-05 00:03:08.032783 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-05 00:03:08.034697 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-05 00:03:08.035785 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-05 00:03:08.036647 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-05 00:03:08.037001 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-05 00:03:08.041449 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-05 00:03:08.050298 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-05 00:03:08.072342 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-05 00:03:11.478459 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=0c48f3d3-3578-4105-bad4-0680d7a51f19/e9fbedff-eb29-4e1b-a232-9476e4a5bada]
2026-03-05 00:03:11.481120 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=387a9ae6-48e7-485a-bd6a-091324f5ad6f/c272dc3f-f5b6-4857-91f2-561a599f15b5]
2026-03-05 00:03:11.495151 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=0c48f3d3-3578-4105-bad4-0680d7a51f19/1cde8d38-c9d3-4512-8106-c139834ff42b]
2026-03-05 00:03:11.519935 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=8a5fb719-e912-43b0-b2ea-9a2f1e6b2e92/80e7620b-1c7d-40ff-852b-40246feca9c5]
2026-03-05 00:03:11.532421 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=387a9ae6-48e7-485a-bd6a-091324f5ad6f/bc7e009b-77b4-429d-819f-0751386ded0b]
2026-03-05 00:03:11.749944 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=8a5fb719-e912-43b0-b2ea-9a2f1e6b2e92/886d7f4d-c342-4547-93ea-f5198c18b4a1]
2026-03-05 00:03:17.608679 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=0c48f3d3-3578-4105-bad4-0680d7a51f19/7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4]
2026-03-05 00:03:17.616889 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=8a5fb719-e912-43b0-b2ea-9a2f1e6b2e92/177e9830-d762-48d2-8720-88dd872b3a27]
2026-03-05 00:03:17.637366 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=387a9ae6-48e7-485a-bd6a-091324f5ad6f/9c8197fe-cfc6-470d-b43f-168fdfa4c980]
2026-03-05 00:03:18.039049 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-05 00:03:28.048746 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-05 00:03:28.565736 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=908592ef-8c39-4ac0-9345-2700a184fd72]
2026-03-05 00:03:28.590966 | orchestrator |
2026-03-05 00:03:28.591132 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-05 00:03:28.591146 | orchestrator |
2026-03-05 00:03:28.591156 | orchestrator | Outputs:
2026-03-05 00:03:28.591165 | orchestrator |
2026-03-05 00:03:28.591186 | orchestrator | manager_address =
2026-03-05 00:03:28.591196 | orchestrator | private_key =
2026-03-05 00:03:29.039368 | orchestrator | ok: Runtime: 0:01:17.484088
2026-03-05 00:03:29.069347 |
2026-03-05 00:03:29.069469 | TASK [Fetch manager address]
2026-03-05 00:03:29.527641 | orchestrator | ok
2026-03-05 00:03:29.540743 |
2026-03-05 00:03:29.540907 | TASK [Set manager_host address]
2026-03-05 00:03:29.623023 | orchestrator | ok
2026-03-05 00:03:29.633407 |
2026-03-05 00:03:29.633572 | LOOP [Update ansible collections]
2026-03-05 00:03:30.626020 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-05 00:03:30.626427 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-05 00:03:30.626486 | orchestrator | Starting galaxy collection install process
2026-03-05 00:03:30.626525 | orchestrator | Process install dependency map
2026-03-05 00:03:30.626560 | orchestrator | Starting collection install process
2026-03-05 00:03:30.626593 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-03-05 00:03:30.626634 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-03-05 00:03:30.626686 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-05 00:03:30.626774 | orchestrator | ok: Item: commons Runtime: 0:00:00.643044
2026-03-05 00:03:31.695967 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-05 00:03:31.696139 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-05 00:03:31.696193 | orchestrator | Starting galaxy collection install process
2026-03-05 00:03:31.696236 | orchestrator | Process install dependency map
2026-03-05 00:03:31.696275 | orchestrator | Starting collection install process
2026-03-05 00:03:31.696311 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-03-05 00:03:31.696349 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-03-05 00:03:31.696384 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-05 00:03:31.696439 | orchestrator | ok: Item: services Runtime: 0:00:00.791713
2026-03-05 00:03:31.719472 |
2026-03-05 00:03:31.719627 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-05 00:03:42.306564 | orchestrator | ok
2026-03-05 00:03:42.318023 |
2026-03-05 00:03:42.318149 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-05 00:04:42.363809 | orchestrator | ok
2026-03-05 00:04:42.374640 |
2026-03-05 00:04:42.374775 | TASK [Fetch manager ssh hostkey]
2026-03-05 00:04:43.963675 | orchestrator | Output suppressed because no_log was given
2026-03-05 00:04:43.977765 |
2026-03-05 00:04:43.977934 | TASK [Get ssh keypair from terraform environment]
2026-03-05 00:04:44.514521 | orchestrator | ok: Runtime: 0:00:00.008901
2026-03-05 00:04:44.534920 |
2026-03-05 00:04:44.535192 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-05 00:04:44.586762 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-05 00:04:44.596978 |
2026-03-05 00:04:44.597188 | TASK [Run manager part 0]
2026-03-05 00:04:45.434554 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-05 00:04:45.486143 | orchestrator |
2026-03-05 00:04:45.486203 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-05 00:04:45.486212 | orchestrator |
2026-03-05 00:04:45.486226 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-05 00:04:47.567561 | orchestrator | ok: [testbed-manager]
2026-03-05 00:04:47.567616 | orchestrator |
2026-03-05 00:04:47.567643 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-05 00:04:47.567655 | orchestrator |
2026-03-05 00:04:47.567667 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-05 00:04:49.626931 | orchestrator | ok: [testbed-manager]
2026-03-05 00:04:49.627021 | orchestrator |
2026-03-05 00:04:49.627046 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-05 00:04:50.383709 | orchestrator | ok: [testbed-manager]
2026-03-05 00:04:50.383768 | orchestrator |
2026-03-05 00:04:50.383779 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-05 00:04:50.433859 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:04:50.433913 | orchestrator |
2026-03-05 00:04:50.433926 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-05 00:04:50.471674 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:04:50.471731 | orchestrator |
2026-03-05 00:04:50.471742 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-05 00:04:50.502282 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:04:50.502329 | orchestrator |
2026-03-05 00:04:50.502335 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-05 00:04:50.535107 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:04:50.535160 | orchestrator |
2026-03-05 00:04:50.535167 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-05 00:04:50.566668 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:04:50.566715 | orchestrator |
2026-03-05 00:04:50.566722 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-05 00:04:50.602297 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:04:50.602360 | orchestrator |
2026-03-05 00:04:50.602373 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-05 00:04:50.633884 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:04:50.633935 | orchestrator |
2026-03-05 00:04:50.633944 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-05 00:04:51.412688 | orchestrator | changed: [testbed-manager]
2026-03-05 00:04:51.412721 | orchestrator |
2026-03-05 00:04:51.412726 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-05 00:07:59.119236 | orchestrator | changed: [testbed-manager]
2026-03-05 00:07:59.119338 | orchestrator |
2026-03-05 00:07:59.119360 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-05 00:10:11.796933 | orchestrator | changed: [testbed-manager]
2026-03-05 00:10:11.796999 | orchestrator |
2026-03-05 00:10:11.797010 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-05 00:10:32.679100 | orchestrator | changed: [testbed-manager]
2026-03-05 00:10:32.679241 | orchestrator |
2026-03-05 00:10:32.679272 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-05 00:10:44.134542 | orchestrator | changed: [testbed-manager]
2026-03-05 00:10:44.134589 | orchestrator |
2026-03-05 00:10:44.134597 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-05 00:10:44.171615 | orchestrator | ok: [testbed-manager]
2026-03-05 00:10:44.171650 | orchestrator |
2026-03-05 00:10:44.171656 | orchestrator | TASK [Get current user] ********************************************************
2026-03-05 00:10:44.953616 | orchestrator | ok: [testbed-manager]
2026-03-05 00:10:44.953711 | orchestrator |
2026-03-05 00:10:44.953730 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-05 00:10:45.683470 | orchestrator | changed: [testbed-manager]
2026-03-05 00:10:45.683551 | orchestrator |
2026-03-05 00:10:45.683567 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-05 00:10:51.633643 | orchestrator | changed: [testbed-manager]
2026-03-05 00:10:51.633686 | orchestrator |
2026-03-05 00:10:51.633709 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-05 00:10:57.097085 | orchestrator | changed: [testbed-manager]
2026-03-05 00:10:57.097169 |
orchestrator | 2026-03-05 00:10:57.097181 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-05 00:10:59.698392 | orchestrator | changed: [testbed-manager] 2026-03-05 00:10:59.698435 | orchestrator | 2026-03-05 00:10:59.698444 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-05 00:11:01.405216 | orchestrator | changed: [testbed-manager] 2026-03-05 00:11:01.405306 | orchestrator | 2026-03-05 00:11:01.405323 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-05 00:11:02.506786 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-05 00:11:02.506886 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-05 00:11:02.506902 | orchestrator | 2026-03-05 00:11:02.506915 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-05 00:11:02.554892 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-05 00:11:02.554977 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-05 00:11:02.554988 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-05 00:11:02.554996 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-05 00:11:05.763719 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-05 00:11:05.763905 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-05 00:11:05.763915 | orchestrator | 2026-03-05 00:11:05.763923 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-05 00:11:06.318038 | orchestrator | changed: [testbed-manager] 2026-03-05 00:11:06.318077 | orchestrator | 2026-03-05 00:11:06.318084 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-05 00:15:28.367482 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-05 00:15:28.367563 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-05 00:15:28.367578 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-05 00:15:28.367588 | orchestrator | 2026-03-05 00:15:28.367598 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-05 00:15:30.668096 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-05 00:15:30.668140 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-05 00:15:30.668148 | orchestrator | 2026-03-05 00:15:30.668156 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-05 00:15:30.668163 | orchestrator | 2026-03-05 00:15:30.668169 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:15:32.031980 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:32.032019 | orchestrator | 2026-03-05 00:15:32.032028 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-05 00:15:32.081199 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:32.081240 | 
orchestrator | 2026-03-05 00:15:32.081248 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-05 00:15:32.150454 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:32.151202 | orchestrator | 2026-03-05 00:15:32.151227 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-05 00:15:32.924546 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:32.924591 | orchestrator | 2026-03-05 00:15:32.924600 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-05 00:15:33.642811 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:33.642927 | orchestrator | 2026-03-05 00:15:33.642955 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-05 00:15:34.947548 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-05 00:15:34.947636 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-05 00:15:34.947661 | orchestrator | 2026-03-05 00:15:34.947703 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-05 00:15:36.313256 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:36.313384 | orchestrator | 2026-03-05 00:15:36.313405 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-05 00:15:38.029855 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-05 00:15:38.029902 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-05 00:15:38.029911 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-05 00:15:38.029918 | orchestrator | 2026-03-05 00:15:38.029926 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-05 00:15:38.080989 | orchestrator | skipping: 
[testbed-manager] 2026-03-05 00:15:38.081037 | orchestrator | 2026-03-05 00:15:38.081046 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-05 00:15:38.149985 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:15:38.150053 | orchestrator | 2026-03-05 00:15:38.150065 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-05 00:15:38.704993 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:38.705036 | orchestrator | 2026-03-05 00:15:38.705045 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-05 00:15:38.773462 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:15:38.773502 | orchestrator | 2026-03-05 00:15:38.773510 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-05 00:15:39.599026 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-05 00:15:39.599131 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:39.599156 | orchestrator | 2026-03-05 00:15:39.599175 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-05 00:15:39.634922 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:15:39.635013 | orchestrator | 2026-03-05 00:15:39.635036 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-05 00:15:39.682775 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:15:39.682814 | orchestrator | 2026-03-05 00:15:39.682849 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-05 00:15:39.721378 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:15:39.721421 | orchestrator | 2026-03-05 00:15:39.721431 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-05 00:15:39.783091 | 
orchestrator | skipping: [testbed-manager] 2026-03-05 00:15:39.783133 | orchestrator | 2026-03-05 00:15:39.783141 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-05 00:15:40.513464 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:40.513567 | orchestrator | 2026-03-05 00:15:40.513764 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-05 00:15:40.513771 | orchestrator | 2026-03-05 00:15:40.513776 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:15:41.816605 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:41.816718 | orchestrator | 2026-03-05 00:15:41.816747 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-05 00:15:42.766260 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:42.767329 | orchestrator | 2026-03-05 00:15:42.767354 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:15:42.767371 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-05 00:15:42.767383 | orchestrator | 2026-03-05 00:15:43.076908 | orchestrator | ok: Runtime: 0:10:57.978740 2026-03-05 00:15:43.097928 | 2026-03-05 00:15:43.098082 | TASK [Point out that logging in on the manager is now possible] 2026-03-05 00:15:43.142042 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-05 00:15:43.150313 | 2026-03-05 00:15:43.150420 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-05 00:15:43.184123 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-05 00:15:43.192486 | 2026-03-05 00:15:43.192599 | TASK [Run manager part 1 + 2] 2026-03-05 00:15:44.071753 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-05 00:15:44.130809 | orchestrator | 2026-03-05 00:15:44.130887 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-05 00:15:44.130895 | orchestrator | 2026-03-05 00:15:44.130908 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:15:47.008371 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:47.008461 | orchestrator | 2026-03-05 00:15:47.008515 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-05 00:15:47.050357 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:15:47.050416 | orchestrator | 2026-03-05 00:15:47.050428 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-05 00:15:47.094048 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:47.094120 | orchestrator | 2026-03-05 00:15:47.094130 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-05 00:15:47.137205 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:47.137263 | orchestrator | 2026-03-05 00:15:47.137271 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-05 00:15:47.206871 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:47.206911 | orchestrator | 2026-03-05 00:15:47.206919 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-05 00:15:47.273025 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:47.273064 | orchestrator | 2026-03-05 00:15:47.273071 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-05 00:15:47.312857 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-05 00:15:47.312921 | orchestrator | 2026-03-05 00:15:47.312931 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-05 00:15:48.014591 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:48.014740 | orchestrator | 2026-03-05 00:15:48.014765 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-05 00:15:48.061644 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:15:48.061701 | orchestrator | 2026-03-05 00:15:48.061707 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-05 00:15:49.420503 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:49.420564 | orchestrator | 2026-03-05 00:15:49.420573 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-05 00:15:49.958063 | orchestrator | ok: [testbed-manager] 2026-03-05 00:15:49.958133 | orchestrator | 2026-03-05 00:15:49.958144 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-05 00:15:51.122944 | orchestrator | changed: [testbed-manager] 2026-03-05 00:15:51.123016 | orchestrator | 2026-03-05 00:15:51.123034 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-05 00:16:05.861729 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:05.861858 | orchestrator | 2026-03-05 00:16:05.861874 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-05 00:16:06.529883 | orchestrator | ok: [testbed-manager] 2026-03-05 00:16:06.529978 | orchestrator | 2026-03-05 00:16:06.529997 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-05 00:16:06.583744 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:16:06.583860 | orchestrator | 2026-03-05 00:16:06.583877 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-05 00:16:07.484731 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:07.484851 | orchestrator | 2026-03-05 00:16:07.484868 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-05 00:16:08.420868 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:08.420966 | orchestrator | 2026-03-05 00:16:08.420987 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-05 00:16:08.976966 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:08.977062 | orchestrator | 2026-03-05 00:16:08.977078 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-05 00:16:09.022554 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-05 00:16:09.022688 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-05 00:16:09.022707 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-05 00:16:09.022720 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-05 00:16:11.100000 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:11.100051 | orchestrator | 2026-03-05 00:16:11.100060 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-05 00:16:19.939947 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-05 00:16:19.940096 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-05 00:16:19.940128 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-05 00:16:19.940151 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-05 00:16:19.940183 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-05 00:16:19.940205 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-05 00:16:19.940225 | orchestrator | 2026-03-05 00:16:19.940238 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-05 00:16:21.722108 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:21.722204 | orchestrator | 2026-03-05 00:16:21.722220 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-05 00:16:21.769680 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:16:21.769724 | orchestrator | 2026-03-05 00:16:21.769732 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-05 00:16:24.778321 | orchestrator | changed: [testbed-manager] 2026-03-05 00:16:24.778399 | orchestrator | 2026-03-05 00:16:24.778414 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-05 00:16:24.816063 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:16:24.816179 | orchestrator | 2026-03-05 00:16:24.816195 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-05 00:17:59.276889 | orchestrator | changed: [testbed-manager] 2026-03-05 
00:17:59.276991 | orchestrator | 2026-03-05 00:17:59.277012 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-05 00:18:00.271105 | orchestrator | ok: [testbed-manager] 2026-03-05 00:18:00.271198 | orchestrator | 2026-03-05 00:18:00.271215 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:18:00.271228 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-05 00:18:00.271240 | orchestrator | 2026-03-05 00:18:00.445955 | orchestrator | ok: Runtime: 0:02:16.866870 2026-03-05 00:18:00.458606 | 2026-03-05 00:18:00.458747 | TASK [Reboot manager] 2026-03-05 00:18:01.996900 | orchestrator | ok: Runtime: 0:00:00.942303 2026-03-05 00:18:02.013205 | 2026-03-05 00:18:02.013346 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-05 00:18:16.061298 | orchestrator | ok 2026-03-05 00:18:16.071043 | 2026-03-05 00:18:16.071178 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-05 00:19:16.110923 | orchestrator | ok 2026-03-05 00:19:16.121490 | 2026-03-05 00:19:16.121624 | TASK [Deploy manager + bootstrap nodes] 2026-03-05 00:19:18.503367 | orchestrator | 2026-03-05 00:19:18.503657 | orchestrator | # DEPLOY MANAGER 2026-03-05 00:19:18.503723 | orchestrator | 2026-03-05 00:19:18.503741 | orchestrator | + set -e 2026-03-05 00:19:18.503755 | orchestrator | + echo 2026-03-05 00:19:18.503770 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-05 00:19:18.503788 | orchestrator | + echo 2026-03-05 00:19:18.503843 | orchestrator | + cat /opt/manager-vars.sh 2026-03-05 00:19:18.506275 | orchestrator | export NUMBER_OF_NODES=6 2026-03-05 00:19:18.506323 | orchestrator | 2026-03-05 00:19:18.506339 | orchestrator | export CEPH_VERSION=reef 2026-03-05 00:19:18.506361 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-05 00:19:18.506382 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-03-05 00:19:18.506418 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-05 00:19:18.506438 | orchestrator | 2026-03-05 00:19:18.506469 | orchestrator | export ARA=false 2026-03-05 00:19:18.506488 | orchestrator | export DEPLOY_MODE=manager 2026-03-05 00:19:18.506517 | orchestrator | export TEMPEST=true 2026-03-05 00:19:18.506537 | orchestrator | export IS_ZUUL=true 2026-03-05 00:19:18.506556 | orchestrator | 2026-03-05 00:19:18.506585 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.20 2026-03-05 00:19:18.506603 | orchestrator | export EXTERNAL_API=false 2026-03-05 00:19:18.506619 | orchestrator | 2026-03-05 00:19:18.506637 | orchestrator | export IMAGE_USER=ubuntu 2026-03-05 00:19:18.506663 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-05 00:19:18.506681 | orchestrator | 2026-03-05 00:19:18.506742 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-05 00:19:18.506773 | orchestrator | 2026-03-05 00:19:18.506793 | orchestrator | + echo 2026-03-05 00:19:18.506820 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-05 00:19:18.507913 | orchestrator | ++ export INTERACTIVE=false 2026-03-05 00:19:18.508013 | orchestrator | ++ INTERACTIVE=false 2026-03-05 00:19:18.508033 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-05 00:19:18.508050 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-05 00:19:18.508062 | orchestrator | + source /opt/manager-vars.sh 2026-03-05 00:19:18.508073 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-05 00:19:18.508084 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-05 00:19:18.508095 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-05 00:19:18.508106 | orchestrator | ++ CEPH_VERSION=reef 2026-03-05 00:19:18.508117 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-05 00:19:18.508131 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-05 00:19:18.508153 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-05 00:19:18.508165 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-05 00:19:18.508176 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-05 00:19:18.508203 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-05 00:19:18.508215 | orchestrator | ++ export ARA=false 2026-03-05 00:19:18.508226 | orchestrator | ++ ARA=false 2026-03-05 00:19:18.508237 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-05 00:19:18.508366 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-05 00:19:18.508385 | orchestrator | ++ export TEMPEST=true 2026-03-05 00:19:18.508397 | orchestrator | ++ TEMPEST=true 2026-03-05 00:19:18.508409 | orchestrator | ++ export IS_ZUUL=true 2026-03-05 00:19:18.508421 | orchestrator | ++ IS_ZUUL=true 2026-03-05 00:19:18.508433 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.20 2026-03-05 00:19:18.508444 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.20 2026-03-05 00:19:18.508456 | orchestrator | ++ export EXTERNAL_API=false 2026-03-05 00:19:18.508468 | orchestrator | ++ EXTERNAL_API=false 2026-03-05 00:19:18.508479 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-05 00:19:18.508490 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-05 00:19:18.508502 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-05 00:19:18.508514 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-05 00:19:18.508525 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-05 00:19:18.508537 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-05 00:19:18.508549 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-05 00:19:18.559058 | orchestrator | + docker version 2026-03-05 00:19:18.660324 | orchestrator | Client: Docker Engine - Community 2026-03-05 00:19:18.660465 | orchestrator | Version: 27.5.1 2026-03-05 00:19:18.660482 | orchestrator | API version: 1.47 2026-03-05 00:19:18.660497 | orchestrator | Go version: go1.22.11 2026-03-05 00:19:18.660509 | orchestrator | Git commit: 9f9e405 2026-03-05 00:19:18.660527 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-05 00:19:18.660539 | orchestrator | OS/Arch: linux/amd64 2026-03-05 00:19:18.660550 | orchestrator | Context: default 2026-03-05 00:19:18.660562 | orchestrator | 2026-03-05 00:19:18.660573 | orchestrator | Server: Docker Engine - Community 2026-03-05 00:19:18.660585 | orchestrator | Engine: 2026-03-05 00:19:18.660609 | orchestrator | Version: 27.5.1 2026-03-05 00:19:18.660621 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-05 00:19:18.660667 | orchestrator | Go version: go1.22.11 2026-03-05 00:19:18.660679 | orchestrator | Git commit: 4c9b3b0 2026-03-05 00:19:18.660880 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-05 00:19:18.660898 | orchestrator | OS/Arch: linux/amd64 2026-03-05 00:19:18.660909 | orchestrator | Experimental: false 2026-03-05 00:19:18.660921 | orchestrator | containerd: 2026-03-05 00:19:18.660931 | orchestrator | Version: v2.2.1 2026-03-05 00:19:18.660943 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-05 00:19:18.660955 | orchestrator | runc: 2026-03-05 00:19:18.660966 | orchestrator | Version: 1.3.4 2026-03-05 00:19:18.660977 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-05 00:19:18.661000 | orchestrator | docker-init: 2026-03-05 00:19:18.661011 | orchestrator | Version: 0.19.0 2026-03-05 00:19:18.661023 | orchestrator | GitCommit: de40ad0 2026-03-05 00:19:18.663793 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-05 00:19:18.672882 | orchestrator | + set -e 2026-03-05 00:19:18.672966 | orchestrator | + source /opt/manager-vars.sh 2026-03-05 00:19:18.672979 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-05 00:19:18.672991 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-05 00:19:18.673001 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-05 00:19:18.673011 | orchestrator | ++ CEPH_VERSION=reef 2026-03-05 00:19:18.673021 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-05 
00:19:18.673032 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-05 00:19:18.673042 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-05 00:19:18.673052 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-05 00:19:18.673062 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-05 00:19:18.673071 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-05 00:19:18.673081 | orchestrator | ++ export ARA=false 2026-03-05 00:19:18.673091 | orchestrator | ++ ARA=false 2026-03-05 00:19:18.673101 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-05 00:19:18.673111 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-05 00:19:18.673120 | orchestrator | ++ export TEMPEST=true 2026-03-05 00:19:18.673130 | orchestrator | ++ TEMPEST=true 2026-03-05 00:19:18.673139 | orchestrator | ++ export IS_ZUUL=true 2026-03-05 00:19:18.673149 | orchestrator | ++ IS_ZUUL=true 2026-03-05 00:19:18.673159 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.20 2026-03-05 00:19:18.673168 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.20 2026-03-05 00:19:18.673178 | orchestrator | ++ export EXTERNAL_API=false 2026-03-05 00:19:18.673187 | orchestrator | ++ EXTERNAL_API=false 2026-03-05 00:19:18.673197 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-05 00:19:18.673207 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-05 00:19:18.673216 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-05 00:19:18.673236 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-05 00:19:18.673246 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-05 00:19:18.673256 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-05 00:19:18.673266 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-05 00:19:18.673275 | orchestrator | ++ export INTERACTIVE=false 2026-03-05 00:19:18.673285 | orchestrator | ++ INTERACTIVE=false 2026-03-05 00:19:18.673294 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-05 00:19:18.673308 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-03-05 00:19:18.673318 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-05 00:19:18.673328 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-05 00:19:18.679900 | orchestrator | + set -e 2026-03-05 00:19:18.680405 | orchestrator | + VERSION=9.5.0 2026-03-05 00:19:18.680456 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-03-05 00:19:18.686869 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-05 00:19:18.686911 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-05 00:19:18.691169 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-05 00:19:18.694530 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-05 00:19:18.702117 | orchestrator | /opt/configuration ~ 2026-03-05 00:19:18.702198 | orchestrator | + set -e 2026-03-05 00:19:18.702212 | orchestrator | + pushd /opt/configuration 2026-03-05 00:19:18.702224 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-05 00:19:18.703222 | orchestrator | + source /opt/venv/bin/activate 2026-03-05 00:19:18.703959 | orchestrator | ++ deactivate nondestructive 2026-03-05 00:19:18.704075 | orchestrator | ++ '[' -n '' ']' 2026-03-05 00:19:18.704104 | orchestrator | ++ '[' -n '' ']' 2026-03-05 00:19:18.704176 | orchestrator | ++ hash -r 2026-03-05 00:19:18.704206 | orchestrator | ++ '[' -n '' ']' 2026-03-05 00:19:18.704218 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-05 00:19:18.704229 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-05 00:19:18.704240 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-05 00:19:18.704405 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-05 00:19:18.704441 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-05 00:19:18.704468 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-05 00:19:18.704487 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-05 00:19:18.704514 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-05 00:19:18.704550 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-05 00:19:18.704569 | orchestrator | ++ export PATH 2026-03-05 00:19:18.704586 | orchestrator | ++ '[' -n '' ']' 2026-03-05 00:19:18.704598 | orchestrator | ++ '[' -z '' ']' 2026-03-05 00:19:18.704608 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-05 00:19:18.704619 | orchestrator | ++ PS1='(venv) ' 2026-03-05 00:19:18.704631 | orchestrator | ++ export PS1 2026-03-05 00:19:18.704641 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-05 00:19:18.704653 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-05 00:19:18.704664 | orchestrator | ++ hash -r 2026-03-05 00:19:18.704778 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-05 00:19:19.557105 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-05 00:19:19.557595 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-05 00:19:19.558976 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-05 00:19:19.560068 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-05 00:19:19.560999 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-05 00:19:19.570246 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-05 00:19:19.571442 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-05 00:19:19.572496 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-05 00:19:19.573756 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-05 00:19:19.595112 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-03-05 00:19:19.596269 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-05 00:19:19.597760 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-05 00:19:19.598838 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-05 00:19:19.602421 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-05 00:19:19.763142 | orchestrator | ++ which gilt 2026-03-05 00:19:19.765150 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-05 00:19:19.765224 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-05 00:19:19.965761 | orchestrator | osism.cfg-generics: 2026-03-05 00:19:20.087165 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-05 00:19:20.087612 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-05 00:19:20.088496 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-05 00:19:20.088535 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-05 00:19:20.865439 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-05 00:19:20.873837 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-05 00:19:21.154325 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-05 00:19:21.188146 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-05 00:19:21.188242 | orchestrator | + deactivate 2026-03-05 00:19:21.188265 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-05 00:19:21.188286 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-05 00:19:21.188317 | orchestrator | + export PATH 2026-03-05 00:19:21.188336 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-05 00:19:21.188352 | orchestrator | + '[' -n '' ']' 2026-03-05 00:19:21.188366 | orchestrator | + hash -r 2026-03-05 00:19:21.188377 | orchestrator | + '[' -n '' ']' 2026-03-05 00:19:21.188388 | orchestrator | + unset VIRTUAL_ENV 2026-03-05 00:19:21.188399 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-05 00:19:21.188410 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-05 00:19:21.188421 | orchestrator | + unset -f deactivate 2026-03-05 00:19:21.188445 | orchestrator | ~ 2026-03-05 00:19:21.188458 | orchestrator | + popd 2026-03-05 00:19:21.189970 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-05 00:19:21.189993 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-05 00:19:21.191018 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-05 00:19:21.236896 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-05 00:19:21.236991 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-05 00:19:21.237918 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-05 00:19:21.290355 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-05 00:19:21.291065 | orchestrator | ++ semver 2024.2 2025.1 2026-03-05 00:19:21.343159 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-05 00:19:21.343265 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-05 00:19:21.419803 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-05 00:19:21.419927 | orchestrator | + source /opt/venv/bin/activate 2026-03-05 00:19:21.419953 | orchestrator | ++ deactivate nondestructive 2026-03-05 00:19:21.419976 | orchestrator | ++ '[' -n '' ']' 2026-03-05 00:19:21.419996 | orchestrator | ++ '[' -n '' ']' 2026-03-05 00:19:21.420016 | orchestrator | ++ hash -r 2026-03-05 00:19:21.420035 | orchestrator | ++ '[' -n '' ']' 2026-03-05 00:19:21.420052 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-05 00:19:21.420070 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-05 00:19:21.420115 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-05 00:19:21.420137 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-05 00:19:21.420157 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-05 00:19:21.420176 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-05 00:19:21.420194 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-05 00:19:21.420213 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-05 00:19:21.420258 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-05 00:19:21.420277 | orchestrator | ++ export PATH 2026-03-05 00:19:21.420295 | orchestrator | ++ '[' -n '' ']' 2026-03-05 00:19:21.420313 | orchestrator | ++ '[' -z '' ']' 2026-03-05 00:19:21.420332 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-05 00:19:21.420349 | orchestrator | ++ PS1='(venv) ' 2026-03-05 00:19:21.420368 | orchestrator | ++ export PS1 2026-03-05 00:19:21.420388 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-05 00:19:21.420406 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-05 00:19:21.420425 | orchestrator | ++ hash -r 2026-03-05 00:19:21.420436 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-05 00:19:22.346563 | orchestrator | 2026-03-05 00:19:22.346787 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-05 00:19:22.346820 | orchestrator | 2026-03-05 00:19:22.346838 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-05 00:19:22.823412 | orchestrator | ok: [testbed-manager] 2026-03-05 00:19:22.823519 | orchestrator | 2026-03-05 00:19:22.823534 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-03-05 00:19:23.646097 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:23.646229 | orchestrator | 2026-03-05 00:19:23.646254 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-05 00:19:23.646313 | orchestrator | 2026-03-05 00:19:23.646337 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:19:25.579788 | orchestrator | ok: [testbed-manager] 2026-03-05 00:19:25.579890 | orchestrator | 2026-03-05 00:19:25.579907 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-05 00:19:25.623493 | orchestrator | ok: [testbed-manager] 2026-03-05 00:19:25.623578 | orchestrator | 2026-03-05 00:19:25.623594 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-05 00:19:26.015576 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:26.015732 | orchestrator | 2026-03-05 00:19:26.015781 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-05 00:19:26.044873 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:19:26.044962 | orchestrator | 2026-03-05 00:19:26.044976 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-05 00:19:26.349256 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:26.349382 | orchestrator | 2026-03-05 00:19:26.349400 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-05 00:19:26.649215 | orchestrator | ok: [testbed-manager] 2026-03-05 00:19:26.649381 | orchestrator | 2026-03-05 00:19:26.649399 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-05 00:19:26.766398 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:19:26.766513 | orchestrator | 2026-03-05 00:19:26.766528 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-03-05 00:19:26.766540 | orchestrator | 2026-03-05 00:19:26.766551 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:19:28.335499 | orchestrator | ok: [testbed-manager] 2026-03-05 00:19:28.335585 | orchestrator | 2026-03-05 00:19:28.335596 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-05 00:19:28.433478 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-05 00:19:28.433584 | orchestrator | 2026-03-05 00:19:28.433604 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-05 00:19:28.483756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-05 00:19:28.483845 | orchestrator | 2026-03-05 00:19:28.483858 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-05 00:19:29.524718 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-05 00:19:29.524820 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-05 00:19:29.524836 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-05 00:19:29.524848 | orchestrator | 2026-03-05 00:19:29.524863 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-05 00:19:31.243486 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-05 00:19:31.243607 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-05 00:19:31.243622 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-05 00:19:31.243635 | orchestrator | 2026-03-05 00:19:31.244501 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-03-05 00:19:31.855815 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-05 00:19:31.855939 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:31.855965 | orchestrator | 2026-03-05 00:19:31.855985 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-05 00:19:32.453851 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-05 00:19:32.453987 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:32.454092 | orchestrator | 2026-03-05 00:19:32.454121 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-05 00:19:32.508411 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:19:32.508523 | orchestrator | 2026-03-05 00:19:32.508550 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-05 00:19:32.842906 | orchestrator | ok: [testbed-manager] 2026-03-05 00:19:32.843008 | orchestrator | 2026-03-05 00:19:32.843024 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-05 00:19:32.902281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-05 00:19:32.902371 | orchestrator | 2026-03-05 00:19:32.902384 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-05 00:19:33.941726 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:33.941816 | orchestrator | 2026-03-05 00:19:33.941831 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-05 00:19:34.720098 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:34.720184 | orchestrator | 2026-03-05 00:19:34.720191 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-05 00:19:44.991438 | 
orchestrator | changed: [testbed-manager] 2026-03-05 00:19:44.991552 | orchestrator | 2026-03-05 00:19:44.991579 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-05 00:19:45.050601 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:19:45.050722 | orchestrator | 2026-03-05 00:19:45.050769 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-05 00:19:45.050787 | orchestrator | 2026-03-05 00:19:45.050804 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-05 00:19:46.776261 | orchestrator | ok: [testbed-manager] 2026-03-05 00:19:46.776362 | orchestrator | 2026-03-05 00:19:46.776379 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-05 00:19:46.879424 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-05 00:19:46.879522 | orchestrator | 2026-03-05 00:19:46.879537 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-05 00:19:46.934349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-05 00:19:46.934445 | orchestrator | 2026-03-05 00:19:46.934461 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-05 00:19:49.223295 | orchestrator | ok: [testbed-manager] 2026-03-05 00:19:49.223374 | orchestrator | 2026-03-05 00:19:49.223384 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-05 00:19:49.269605 | orchestrator | ok: [testbed-manager] 2026-03-05 00:19:49.269722 | orchestrator | 2026-03-05 00:19:49.269734 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-05 00:19:49.390991 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-05 00:19:49.391113 | orchestrator | 2026-03-05 00:19:49.391139 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-05 00:19:52.094143 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-05 00:19:52.094264 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-05 00:19:52.094280 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-05 00:19:52.094292 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-05 00:19:52.094304 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-05 00:19:52.094315 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-05 00:19:52.094326 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-05 00:19:52.094336 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-05 00:19:52.094347 | orchestrator | 2026-03-05 00:19:52.094359 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-05 00:19:52.700333 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:52.700431 | orchestrator | 2026-03-05 00:19:52.700451 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-05 00:19:53.322534 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:53.322638 | orchestrator | 2026-03-05 00:19:53.322713 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-05 00:19:53.400636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-05 00:19:53.400777 | orchestrator | 2026-03-05 00:19:53.400796 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-03-05 00:19:54.601379 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-05 00:19:54.601483 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-05 00:19:54.601500 | orchestrator | 2026-03-05 00:19:54.601513 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-05 00:19:55.206931 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:55.207048 | orchestrator | 2026-03-05 00:19:55.207066 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-05 00:19:55.264822 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:19:55.264916 | orchestrator | 2026-03-05 00:19:55.264931 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-05 00:19:55.347238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-05 00:19:55.347346 | orchestrator | 2026-03-05 00:19:55.347362 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-05 00:19:55.940534 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:55.940702 | orchestrator | 2026-03-05 00:19:55.940734 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-05 00:19:56.008083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-05 00:19:56.008209 | orchestrator | 2026-03-05 00:19:56.008241 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-05 00:19:57.312301 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-05 00:19:57.312411 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-03-05 00:19:57.312427 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:57.312440 | orchestrator | 2026-03-05 00:19:57.312453 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-05 00:19:57.921563 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:57.921715 | orchestrator | 2026-03-05 00:19:57.921734 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-05 00:19:57.979264 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:19:57.979361 | orchestrator | 2026-03-05 00:19:57.979375 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-05 00:19:58.077406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-05 00:19:58.077507 | orchestrator | 2026-03-05 00:19:58.077522 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-05 00:19:58.623978 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:58.624077 | orchestrator | 2026-03-05 00:19:58.624093 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-05 00:19:58.995427 | orchestrator | changed: [testbed-manager] 2026-03-05 00:19:58.995541 | orchestrator | 2026-03-05 00:19:58.995558 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-05 00:20:00.127760 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-05 00:20:00.127875 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-05 00:20:00.127890 | orchestrator | 2026-03-05 00:20:00.127902 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-05 00:20:00.753486 | orchestrator | changed: [testbed-manager] 2026-03-05 
00:20:00.753585 | orchestrator | 2026-03-05 00:20:00.753603 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-05 00:20:01.127597 | orchestrator | ok: [testbed-manager] 2026-03-05 00:20:01.127783 | orchestrator | 2026-03-05 00:20:01.127809 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-05 00:20:01.483738 | orchestrator | changed: [testbed-manager] 2026-03-05 00:20:01.483869 | orchestrator | 2026-03-05 00:20:01.483901 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-05 00:20:01.535850 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:20:01.535939 | orchestrator | 2026-03-05 00:20:01.535953 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-05 00:20:01.603007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-05 00:20:01.603127 | orchestrator | 2026-03-05 00:20:01.603142 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-05 00:20:01.649510 | orchestrator | ok: [testbed-manager] 2026-03-05 00:20:01.649679 | orchestrator | 2026-03-05 00:20:01.649721 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-05 00:20:03.667128 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-05 00:20:03.667230 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-05 00:20:03.667246 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-05 00:20:03.667260 | orchestrator | 2026-03-05 00:20:03.667273 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-05 00:20:04.362283 | orchestrator | changed: [testbed-manager] 2026-03-05 
00:20:04.362387 | orchestrator | 2026-03-05 00:20:04.362404 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-05 00:20:05.037019 | orchestrator | changed: [testbed-manager] 2026-03-05 00:20:05.037091 | orchestrator | 2026-03-05 00:20:05.037102 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-05 00:20:05.700397 | orchestrator | changed: [testbed-manager] 2026-03-05 00:20:05.700485 | orchestrator | 2026-03-05 00:20:05.700501 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-05 00:20:05.762880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-05 00:20:05.762945 | orchestrator | 2026-03-05 00:20:05.762958 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-05 00:20:05.810456 | orchestrator | ok: [testbed-manager] 2026-03-05 00:20:05.810536 | orchestrator | 2026-03-05 00:20:05.810551 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-05 00:20:06.430964 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-05 00:20:06.431040 | orchestrator | 2026-03-05 00:20:06.431055 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-05 00:20:06.513037 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-05 00:20:06.513121 | orchestrator | 2026-03-05 00:20:06.513137 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-05 00:20:07.132378 | orchestrator | changed: [testbed-manager] 2026-03-05 00:20:07.132466 | orchestrator | 2026-03-05 00:20:07.132497 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-03-05 00:20:07.672301 | orchestrator | ok: [testbed-manager] 2026-03-05 00:20:07.672395 | orchestrator | 2026-03-05 00:20:07.672414 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-05 00:20:07.726874 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:20:07.726950 | orchestrator | 2026-03-05 00:20:07.726965 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-05 00:20:07.787303 | orchestrator | ok: [testbed-manager] 2026-03-05 00:20:07.787401 | orchestrator | 2026-03-05 00:20:07.787429 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-05 00:20:08.527317 | orchestrator | changed: [testbed-manager] 2026-03-05 00:20:08.527403 | orchestrator | 2026-03-05 00:20:08.527419 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-05 00:21:20.615663 | orchestrator | changed: [testbed-manager] 2026-03-05 00:21:20.615820 | orchestrator | 2026-03-05 00:21:20.615838 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-05 00:21:21.615090 | orchestrator | ok: [testbed-manager] 2026-03-05 00:21:21.615186 | orchestrator | 2026-03-05 00:21:21.615201 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-05 00:21:21.672825 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:21:21.672945 | orchestrator | 2026-03-05 00:21:21.672964 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-05 00:21:27.475714 | orchestrator | changed: [testbed-manager] 2026-03-05 00:21:27.475826 | orchestrator | 2026-03-05 00:21:27.475843 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-03-05 00:21:27.533940 | orchestrator | ok: [testbed-manager] 2026-03-05 00:21:27.534118 | orchestrator | 2026-03-05 00:21:27.534144 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-05 00:21:27.534162 | orchestrator | 2026-03-05 00:21:27.534179 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-05 00:21:27.691096 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:21:27.691197 | orchestrator | 2026-03-05 00:21:27.691215 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-05 00:22:27.744090 | orchestrator | Pausing for 60 seconds 2026-03-05 00:22:27.744200 | orchestrator | changed: [testbed-manager] 2026-03-05 00:22:27.744215 | orchestrator | 2026-03-05 00:22:27.744227 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-05 00:22:30.789942 | orchestrator | changed: [testbed-manager] 2026-03-05 00:22:30.790133 | orchestrator | 2026-03-05 00:22:30.790205 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-05 00:23:12.255948 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-05 00:23:12.256090 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
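The "Wait for an healthy manager service" handler above retried up to 50 times until the container reported healthy. A sketch of that wait-until-healthy pattern; `check_health` is a stand-in probe (a real check would inspect the container runtime, e.g. `docker inspect --format '{{.State.Health.Status}}'`), and the retry/delay values mirror the idea rather than the role's actual settings:

```shell
#!/usr/bin/env bash
# Sketch: retry a health probe until it succeeds or retries are exhausted.
set -e

retries=50
delay=1
attempt=0

check_health() {
  # Stand-in probe: reports healthy from the third attempt onward.
  # Real code would query the container's health status instead.
  [[ "${attempt}" -ge 3 ]]
}

until check_health; do
  attempt=$((attempt + 1))
  if [[ "${attempt}" -ge "${retries}" ]]; then
    echo "service did not become healthy" >&2
    exit 1
  fi
  sleep "${delay}"
done
echo "healthy after ${attempt} attempts"
```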
2026-03-05 00:23:12.256107 | orchestrator | changed: [testbed-manager]
2026-03-05 00:23:12.256121 | orchestrator |
2026-03-05 00:23:12.256153 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-05 00:23:22.321624 | orchestrator | changed: [testbed-manager]
2026-03-05 00:23:22.321791 | orchestrator |
2026-03-05 00:23:22.321811 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-05 00:23:22.401152 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-05 00:23:22.401269 | orchestrator |
2026-03-05 00:23:22.401284 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-05 00:23:22.415828 | orchestrator |
2026-03-05 00:23:22.415903 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-05 00:23:22.449848 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:23:22.449925 | orchestrator |
2026-03-05 00:23:22.449940 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-05 00:23:22.528811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-05 00:23:22.528905 | orchestrator |
2026-03-05 00:23:22.528922 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-05 00:23:23.276979 | orchestrator | changed: [testbed-manager]
2026-03-05 00:23:23.277080 | orchestrator |
2026-03-05 00:23:23.277098 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-05 00:23:26.343497 | orchestrator | ok: [testbed-manager]
2026-03-05 00:23:26.343619 | orchestrator |
2026-03-05 00:23:26.343724 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-05 00:23:26.416129 | orchestrator | ok: [testbed-manager] => {
2026-03-05 00:23:26.416225 | orchestrator | "version_check_result.stdout_lines": [
2026-03-05 00:23:26.416242 | orchestrator | "=== OSISM Container Version Check ===",
2026-03-05 00:23:26.416253 | orchestrator | "Checking running containers against expected versions...",
2026-03-05 00:23:26.416266 | orchestrator | "",
2026-03-05 00:23:26.416277 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-05 00:23:26.416289 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-05 00:23:26.416301 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.416313 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-05 00:23:26.416324 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.416335 | orchestrator | "",
2026-03-05 00:23:26.416353 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-05 00:23:26.416372 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-05 00:23:26.416403 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.416455 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-05 00:23:26.416475 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.416492 | orchestrator | "",
2026-03-05 00:23:26.416510 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-05 00:23:26.416529 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-05 00:23:26.416547 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.416566 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-05 00:23:26.416585 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.416603 | orchestrator | "",
2026-03-05 00:23:26.416622 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-05 00:23:26.416635 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-05 00:23:26.416648 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.416702 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-05 00:23:26.416716 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.416728 | orchestrator | "",
2026-03-05 00:23:26.416741 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-05 00:23:26.416757 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-05 00:23:26.416769 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.416782 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-05 00:23:26.416794 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.416806 | orchestrator | "",
2026-03-05 00:23:26.416819 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-03-05 00:23:26.416832 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-05 00:23:26.416845 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.416857 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-05 00:23:26.416870 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.416882 | orchestrator | "",
2026-03-05 00:23:26.416895 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-03-05 00:23:26.416908 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-05 00:23:26.416920 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.416933 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-05 00:23:26.416946 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.416959 | orchestrator | "",
2026-03-05 00:23:26.416973 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-05 00:23:26.416984 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-05 00:23:26.416995 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.417005 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-05 00:23:26.417016 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.417027 | orchestrator | "",
2026-03-05 00:23:26.417038 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-05 00:23:26.417049 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-05 00:23:26.417060 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.417070 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-05 00:23:26.417081 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.417092 | orchestrator | "",
2026-03-05 00:23:26.417103 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-05 00:23:26.417114 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-05 00:23:26.417125 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.417136 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-05 00:23:26.417147 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.417157 | orchestrator | "",
2026-03-05 00:23:26.417168 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-05 00:23:26.417179 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-05 00:23:26.417190 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.417211 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-05 00:23:26.417222 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.417233 | orchestrator | "",
2026-03-05 00:23:26.417244 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-05 00:23:26.417255 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-05 00:23:26.417266 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.417277 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-05 00:23:26.417287 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.417298 | orchestrator | "",
2026-03-05 00:23:26.417310 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-03-05 00:23:26.417321 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-05 00:23:26.417332 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.417343 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-05 00:23:26.417353 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.417364 | orchestrator | "",
2026-03-05 00:23:26.417375 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-03-05 00:23:26.417386 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-05 00:23:26.417397 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.417408 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-05 00:23:26.417438 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.417449 | orchestrator | "",
2026-03-05 00:23:26.417460 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-03-05 00:23:26.417471 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-05 00:23:26.417482 | orchestrator | " Enabled: true",
2026-03-05 00:23:26.417502 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-05 00:23:26.417513 | orchestrator | " Status: ✅ MATCH",
2026-03-05 00:23:26.417524 | orchestrator | "",
2026-03-05 00:23:26.417535 | orchestrator | "=== Summary ===",
2026-03-05 00:23:26.417546 | orchestrator | "Errors (version mismatches): 0",
2026-03-05 00:23:26.417558 | orchestrator | "Warnings (expected containers not running): 0",
2026-03-05 00:23:26.417578 | orchestrator | "",
2026-03-05 00:23:26.417597 | orchestrator | "✅ All running containers match expected versions!"
2026-03-05 00:23:26.417617 | orchestrator | ]
2026-03-05 00:23:26.417636 | orchestrator | }
2026-03-05 00:23:26.417679 | orchestrator |
2026-03-05 00:23:26.417700 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-05 00:23:26.462492 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:23:26.462583 | orchestrator |
2026-03-05 00:23:26.462597 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:23:26.462611 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-05 00:23:26.462632 | orchestrator |
2026-03-05 00:23:26.559118 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-05 00:23:26.559211 | orchestrator | + deactivate
2026-03-05 00:23:26.559227 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-05 00:23:26.559240 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-05 00:23:26.559251 | orchestrator | + export PATH
2026-03-05 00:23:26.559263 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-05 00:23:26.559274 | orchestrator | + '[' -n '' ']'
2026-03-05 00:23:26.559286 | orchestrator | + hash -r
2026-03-05 00:23:26.559297 | orchestrator | + '[' -n '' ']'
2026-03-05 00:23:26.559307 | orchestrator | + unset VIRTUAL_ENV
2026-03-05 00:23:26.559318 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-05 00:23:26.559329 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-05 00:23:26.559340 | orchestrator | + unset -f deactivate
2026-03-05 00:23:26.559352 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-03-05 00:23:26.567233 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-05 00:23:26.567274 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-05 00:23:26.567286 | orchestrator | + local max_attempts=60
2026-03-05 00:23:26.567297 | orchestrator | + local name=ceph-ansible
2026-03-05 00:23:26.567337 | orchestrator | + local attempt_num=1
2026-03-05 00:23:26.568170 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-05 00:23:26.608087 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-05 00:23:26.608170 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-05 00:23:26.608184 | orchestrator | + local max_attempts=60
2026-03-05 00:23:26.608196 | orchestrator | + local name=kolla-ansible
2026-03-05 00:23:26.608208 | orchestrator | + local attempt_num=1
2026-03-05 00:23:26.608503 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-05 00:23:26.646882 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-05 00:23:26.646963 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-05 00:23:26.646976 | orchestrator | + local max_attempts=60
2026-03-05 00:23:26.646988 | orchestrator | + local name=osism-ansible
2026-03-05 00:23:26.646999 | orchestrator | + local attempt_num=1
2026-03-05 00:23:26.647487 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-05 00:23:26.683222 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-05 00:23:26.683303 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-05 00:23:26.683316 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-05 00:23:27.307531 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-05 00:23:27.479186 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-05 00:23:27.479283 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2026-03-05 00:23:27.479299 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2026-03-05 00:23:27.479310 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2026-03-05 00:23:27.479323 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp
2026-03-05 00:23:27.479356 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy)
2026-03-05 00:23:27.479367 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy)
2026-03-05 00:23:27.479378 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 57 seconds (healthy)
2026-03-05 00:23:27.479389 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy)
2026-03-05 00:23:27.479400 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp
2026-03-05 00:23:27.479411 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up About a minute (healthy)
2026-03-05 00:23:27.479422 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp
2026-03-05 00:23:27.479433 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2026-03-05 00:23:27.479462 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp
2026-03-05 00:23:27.479474 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2026-03-05 00:23:27.479485 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy)
2026-03-05 00:23:27.483879 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-05 00:23:27.531846 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-05 00:23:27.531927 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-03-05 00:23:27.536493 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-03-05 00:23:39.780659 | orchestrator | 2026-03-05 00:23:39 | INFO  | Task f706c0af-dc05-4c1b-9c26-098b0f82b8b2 (resolvconf) was prepared for execution.
2026-03-05 00:23:39.780770 | orchestrator | 2026-03-05 00:23:39 | INFO  | It takes a moment until task f706c0af-dc05-4c1b-9c26-098b0f82b8b2 (resolvconf) has been started and output is visible here.
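The `+`-prefixed xtrace lines above come from a `wait_for_container_healthy` shell helper. A sketch of what that function plausibly looks like, reconstructed from the trace: the function name, arguments (`max_attempts`, `name`, `attempt_num`), and the `docker inspect` health probe are visible in the log, while the loop body and sleep interval are assumptions:

```shell
#!/usr/bin/env bash
# Reconstructed sketch: poll a container's health status via `docker inspect`
# until it reports "healthy" or the attempt budget is exhausted.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "$name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}
```

In the log it is called as `wait_for_container_healthy 60 ceph-ansible`, and each container happened to be healthy on the first probe, so the loop body never appears in the trace.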
2026-03-05 00:23:52.437083 | orchestrator |
2026-03-05 00:23:52.437229 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-03-05 00:23:52.437258 | orchestrator |
2026-03-05 00:23:52.437278 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-05 00:23:52.437296 | orchestrator | Thursday 05 March 2026 00:23:43 +0000 (0:00:00.100) 0:00:00.100 ********
2026-03-05 00:23:52.437314 | orchestrator | ok: [testbed-manager]
2026-03-05 00:23:52.437335 | orchestrator |
2026-03-05 00:23:52.437350 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-05 00:23:52.437362 | orchestrator | Thursday 05 March 2026 00:23:46 +0000 (0:00:03.334) 0:00:03.435 ********
2026-03-05 00:23:52.437373 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:23:52.437385 | orchestrator |
2026-03-05 00:23:52.437396 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-05 00:23:52.437407 | orchestrator | Thursday 05 March 2026 00:23:46 +0000 (0:00:00.052) 0:00:03.487 ********
2026-03-05 00:23:52.437418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-03-05 00:23:52.437430 | orchestrator |
2026-03-05 00:23:52.437441 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-05 00:23:52.437452 | orchestrator | Thursday 05 March 2026 00:23:46 +0000 (0:00:00.076) 0:00:03.563 ********
2026-03-05 00:23:52.437484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-03-05 00:23:52.437495 | orchestrator |
2026-03-05 00:23:52.437506 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-05 00:23:52.437517 | orchestrator | Thursday 05 March 2026 00:23:46 +0000 (0:00:00.066) 0:00:03.630 ********
2026-03-05 00:23:52.437528 | orchestrator | ok: [testbed-manager]
2026-03-05 00:23:52.437538 | orchestrator |
2026-03-05 00:23:52.437549 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-05 00:23:52.437560 | orchestrator | Thursday 05 March 2026 00:23:48 +0000 (0:00:01.009) 0:00:04.640 ********
2026-03-05 00:23:52.437573 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:23:52.437586 | orchestrator |
2026-03-05 00:23:52.437598 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-05 00:23:52.437612 | orchestrator | Thursday 05 March 2026 00:23:48 +0000 (0:00:00.057) 0:00:04.697 ********
2026-03-05 00:23:52.437624 | orchestrator | ok: [testbed-manager]
2026-03-05 00:23:52.437664 | orchestrator |
2026-03-05 00:23:52.437684 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-05 00:23:52.437757 | orchestrator | Thursday 05 March 2026 00:23:48 +0000 (0:00:00.495) 0:00:05.192 ********
2026-03-05 00:23:52.437779 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:23:52.437798 | orchestrator |
2026-03-05 00:23:52.437818 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-05 00:23:52.437840 | orchestrator | Thursday 05 March 2026 00:23:48 +0000 (0:00:00.059) 0:00:05.252 ********
2026-03-05 00:23:52.437860 | orchestrator | changed: [testbed-manager]
2026-03-05 00:23:52.437881 | orchestrator |
2026-03-05 00:23:52.437902 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-05 00:23:52.437924 | orchestrator | Thursday 05 March 2026 00:23:49 +0000 (0:00:00.453) 0:00:05.705 ********
2026-03-05 00:23:52.437945 | orchestrator | changed: [testbed-manager]
2026-03-05 00:23:52.437964 | orchestrator |
2026-03-05 00:23:52.437984 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-05 00:23:52.438002 | orchestrator | Thursday 05 March 2026 00:23:50 +0000 (0:00:01.019) 0:00:06.725 ********
2026-03-05 00:23:52.438097 | orchestrator | ok: [testbed-manager]
2026-03-05 00:23:52.438125 | orchestrator |
2026-03-05 00:23:52.438139 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-05 00:23:52.438150 | orchestrator | Thursday 05 March 2026 00:23:51 +0000 (0:00:00.940) 0:00:07.665 ********
2026-03-05 00:23:52.438161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-03-05 00:23:52.438172 | orchestrator |
2026-03-05 00:23:52.438183 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-05 00:23:52.438194 | orchestrator | Thursday 05 March 2026 00:23:51 +0000 (0:00:00.074) 0:00:07.740 ********
2026-03-05 00:23:52.438205 | orchestrator | changed: [testbed-manager]
2026-03-05 00:23:52.438215 | orchestrator |
2026-03-05 00:23:52.438226 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:23:52.438238 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-05 00:23:52.438249 | orchestrator |
2026-03-05 00:23:52.438260 | orchestrator |
2026-03-05 00:23:52.438270 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:23:52.438281 | orchestrator | Thursday 05 March 2026 00:23:52 +0000 (0:00:01.119) 0:00:08.860 ********
2026-03-05 00:23:52.438292 | orchestrator | ===============================================================================
2026-03-05 00:23:52.438302 | orchestrator | Gathering Facts --------------------------------------------------------- 3.33s
2026-03-05 00:23:52.438313 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.12s
2026-03-05 00:23:52.438324 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.02s
2026-03-05 00:23:52.438334 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.01s
2026-03-05 00:23:52.438345 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s
2026-03-05 00:23:52.438356 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s
2026-03-05 00:23:52.438389 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.45s
2026-03-05 00:23:52.438400 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2026-03-05 00:23:52.438411 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s
2026-03-05 00:23:52.438422 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2026-03-05 00:23:52.438432 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.06s
2026-03-05 00:23:52.438443 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2026-03-05 00:23:52.438466 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s
2026-03-05 00:23:52.691624 | orchestrator | + osism apply sshconfig
2026-03-05 00:24:04.702280 | orchestrator | 2026-03-05 00:24:04 | INFO  | Task a97b5f54-15da-4973-9f9d-bdb19cb927f6 (sshconfig) was prepared for execution.
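The resolvconf play above removes packages that manage `/etc/resolv.conf`, links `/run/systemd/resolve/stub-resolv.conf` to `/etc/resolv.conf`, and restarts `systemd-resolved`. The link step can be sketched as plain shell; a `ROOT` prefix is introduced here (not part of the play) so the sketch can be exercised against a scratch directory instead of the live system:

```shell
#!/usr/bin/env bash
# Sketch of the "Link /run/systemd/resolve/stub-resolv.conf to
# /etc/resolv.conf" task. ROOT defaults to a temporary scratch directory;
# on a real host it would be empty and this would touch /etc directly.
set -eu
ROOT="${ROOT:-$(mktemp -d)}"
mkdir -p "$ROOT/run/systemd/resolve" "$ROOT/etc"
: > "$ROOT/run/systemd/resolve/stub-resolv.conf"
ln -sfn "$ROOT/run/systemd/resolve/stub-resolv.conf" "$ROOT/etc/resolv.conf"
```

On the real host the play follows this with enabling and restarting `systemd-resolved`, which the sketch deliberately omits.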
2026-03-05 00:24:04.702391 | orchestrator | 2026-03-05 00:24:04 | INFO  | It takes a moment until task a97b5f54-15da-4973-9f9d-bdb19cb927f6 (sshconfig) has been started and output is visible here.
2026-03-05 00:24:16.062284 | orchestrator |
2026-03-05 00:24:16.062383 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-03-05 00:24:16.062400 | orchestrator |
2026-03-05 00:24:16.062435 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-03-05 00:24:16.062448 | orchestrator | Thursday 05 March 2026 00:24:08 +0000 (0:00:00.166) 0:00:00.166 ********
2026-03-05 00:24:16.062456 | orchestrator | ok: [testbed-manager]
2026-03-05 00:24:16.062463 | orchestrator |
2026-03-05 00:24:16.062470 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-03-05 00:24:16.062477 | orchestrator | Thursday 05 March 2026 00:24:09 +0000 (0:00:00.549) 0:00:00.715 ********
2026-03-05 00:24:16.062483 | orchestrator | changed: [testbed-manager]
2026-03-05 00:24:16.062491 | orchestrator |
2026-03-05 00:24:16.062497 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-03-05 00:24:16.062504 | orchestrator | Thursday 05 March 2026 00:24:09 +0000 (0:00:00.513) 0:00:01.229 ********
2026-03-05 00:24:16.062510 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-03-05 00:24:16.062517 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-03-05 00:24:16.062523 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-03-05 00:24:16.062530 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-03-05 00:24:16.062536 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-03-05 00:24:16.062542 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-03-05 00:24:16.062548 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-03-05 00:24:16.062555 | orchestrator |
2026-03-05 00:24:16.062561 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-03-05 00:24:16.062567 | orchestrator | Thursday 05 March 2026 00:24:15 +0000 (0:00:05.446) 0:00:06.676 ********
2026-03-05 00:24:16.062573 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:24:16.062579 | orchestrator |
2026-03-05 00:24:16.062586 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-03-05 00:24:16.062592 | orchestrator | Thursday 05 March 2026 00:24:15 +0000 (0:00:00.074) 0:00:06.751 ********
2026-03-05 00:24:16.062598 | orchestrator | changed: [testbed-manager]
2026-03-05 00:24:16.062604 | orchestrator |
2026-03-05 00:24:16.062611 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:24:16.062618 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-05 00:24:16.062625 | orchestrator |
2026-03-05 00:24:16.062631 | orchestrator |
2026-03-05 00:24:16.062637 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:24:16.062643 | orchestrator | Thursday 05 March 2026 00:24:15 +0000 (0:00:00.537) 0:00:07.288 ********
2026-03-05 00:24:16.062650 | orchestrator | ===============================================================================
2026-03-05 00:24:16.062656 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.45s
2026-03-05 00:24:16.062662 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s
2026-03-05 00:24:16.062668 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s
2026-03-05 00:24:16.062674 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.51s
2026-03-05 00:24:16.062681 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2026-03-05 00:24:16.307317 | orchestrator | + osism apply known-hosts
2026-03-05 00:24:28.155652 | orchestrator | 2026-03-05 00:24:28 | INFO  | Task c093c0c3-c12d-4dc1-8f1e-bfc2a3e48dc3 (known-hosts) was prepared for execution.
2026-03-05 00:24:28.155811 | orchestrator | 2026-03-05 00:24:28 | INFO  | It takes a moment until task c093c0c3-c12d-4dc1-8f1e-bfc2a3e48dc3 (known-hosts) has been started and output is visible here.
2026-03-05 00:24:46.420647 | orchestrator |
2026-03-05 00:24:46.420799 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-03-05 00:24:46.420815 | orchestrator |
2026-03-05 00:24:46.420824 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-03-05 00:24:46.420834 | orchestrator | Thursday 05 March 2026 00:24:32 +0000 (0:00:00.161) 0:00:00.161 ********
2026-03-05 00:24:46.420842 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-05 00:24:46.420851 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-05 00:24:46.420859 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-05 00:24:46.420867 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-05 00:24:46.420875 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-05 00:24:46.420883 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-05 00:24:46.420891 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-05 00:24:46.420899 | orchestrator |
2026-03-05 00:24:46.420907 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-03-05 00:24:46.420916 | orchestrator | Thursday 05 March 2026 00:24:38 +0000 (0:00:05.895) 0:00:06.057 ********
2026-03-05 00:24:46.420925 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-05 00:24:46.420936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-05 00:24:46.420944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-05 00:24:46.420952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-05 00:24:46.420959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-05 00:24:46.420977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-05 00:24:46.420985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-05 00:24:46.420993 | orchestrator |
2026-03-05 00:24:46.421001 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-05 00:24:46.421009 | orchestrator | Thursday 05 March 2026 00:24:38 +0000 (0:00:00.161) 0:00:06.219 ********
2026-03-05 00:24:46.421017 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID+z5lpFx00VXCdCPNU9e6ohrZNkk0yg7Eg7TI/dIsZY)
2026-03-05 00:24:46.421033 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCa8QeblgONtOR/8FirSE6W/GAZMrFSBB/cWXfK9dInpwjo0Qu4QwjsK404r1lp6JgbR0VwqMyYaBlhSQzeERWLsIwt8wB57m0YQYN5soC0vvPGPo4VyiWZbmFPBFM9DdsmkrWSqLBifbxtbEyGDqnsBY1+HPmQU0n9xaOVQcWNGwUpTf8eaasZDsjs8qSD4r7zVWURhzNvU9+EhgzFsXEmRWqNfF1MUeRPiRHbC6xBayRGI87qAcp5VGyQG1pBLsLgyHSqL3Vk4NsZW2nuv3eGRvPhG4e2QBHgdpzW0ECyxXrXHiDuvkxQgTMGg2RQAHHIHBiaLM3DNJAjSNA8cN5ZI1ECFRi7/CCzoIE4W8qUacLMpZQYJidw2PUwT4hcyDqXqzRf2Bd+g2xtn8jjZgvPAKTsq6qeJriLxI1S5o9/DVxnBdMtktBe2VP6ifM3ZQV26+PqcCEKcqMEry6mxrFbPz1qgmcl3vrgfe0v8o5b8kPHcIiHfzFQL5gfKYftzZE=)
2026-03-05 00:24:46.421063 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHxavwKWPa1GZN1A37vCgBkdhbtE724y120FmUoUTzU6K4028myRR7hTUSQEb2egUGvN7BekX8Yx68gPmFFLl8E=)
2026-03-05 00:24:46.421073 | orchestrator |
2026-03-05 00:24:46.421081 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-05 00:24:46.421089 | orchestrator | Thursday 05 March 2026 00:24:39 +0000 (0:00:01.102) 0:00:07.321 ********
2026-03-05 00:24:46.421112 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsvGxgcj6xDxfgWjXkUXMgknz+lSBGDej9QxmZffxvl2uRi9Qz2YH2BHkZaMHpNo0RJ4RkZTVBBqs1F7gLACAVoF+9wU6g02ylmJm8WrkhYsTBs63+x0tIEjYE/ANLFLUGim54IrzWMKpjBpQseKscC6aOatqzde2yQR3HV9h8eBgCsxv8tZKFLGkz/jlKlBzew7imw2srpzPWpz3N8jSbCe5f9JsxaAczkPICB+d4AB1/Ut0i1BV5sRg9i4oCbfrZpk96csf6VbZZaItiINm4I4l4v8s5yrzTAqdIN3nMEfJ6v7GfOcKEk/3OeINNG1R7oDin+yI5zgPgBCzD6yAclHhJ66aoHTcWo0/ND6CIiaummvDQ6KCLAturI3uq2mGumPHLczbW9AJi7dyZDvJUrsXb1B1RZb8Gvu34eJ8Z72SnWaAzdh3aFrcreq6MaN83vAXtpxZExKko7M9m+8DBIleNj6yKJ4+xIqFmjSOm/ajb0o3jbUj8aM+6Mn9Krhs=)
2026-03-05 00:24:46.421121 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOrHwDwKaekpNIMx31PlKABilrO8dllVAoVhHgOvgRwrzr+m7ZcdbC9WxnZ8shvLxOeuYRyPuGgLorX7sW4nviE=)
2026-03-05 00:24:46.421129 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOc/nVJgM8SPeB3DDsN3d9rf+Zc+M+Z0Jt7Wtt+N+goo)
2026-03-05 00:24:46.421137 | orchestrator |
2026-03-05 00:24:46.421145 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-05 00:24:46.421154 | orchestrator | Thursday 05 March 2026 00:24:40 +0000 (0:00:00.978) 0:00:08.299 ********
2026-03-05 00:24:46.421165 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs+lNaaJ8x+idmY6ZFiq0KCYpEwY9eJasLY9joXQhP2fY2JUqX9ibVDv0K0H/SXzIDZy+aaAtUE670NsPmCg+AFK9LgqcwAwIN8Cz9NWWLwvhjR5db4dAahyjpLCMbGVRzxfbPdKybFWW0R8Eai+0Z3/qj3ahukdlqh2A8fvju7W6/EuK7N/GhPflJieck1d0bPqRNDsy3HC/wqSGGIsN8eYKEvFPJYIYCz2ni7+xQYlLshfsSjiE17CCygUZdhCiaBtBW9Lsc1hQS8F43NG0A8e/fx/B1X0nhEjU6yKq1vYCw5aQqxoSzyMvFSOrcTQbJL/J3gGlw+XpS0XlSlR8O2qR4076hJG6b+tiOz4BOANvX3zuNlhEduQRnTDFNOgZqhPzWhDMSP6wIXRq5NN4Vq+qcHdiY3W1LwA3f9u4TySzRKdKytkcUO15+aiF/fCRaEQIbSsb96SXXWPRL/PKvoXDP1YNGSxJhlAYXcgkF656tAKlXHSudUCUaWuLw+ws=)
2026-03-05 00:24:46.421174 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPdvWLHB+QjWi630Ogtim3HAmYd3AuGAoOvUwn49+y9KiKzsr3fSOu2CgDzr0pdcYx7uzvqvk4gx6/tmXqqd+NU=)
2026-03-05 00:24:46.421183 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOy8Cy7rv7P9EBbnDITVYYeMKgcBwE2veRafTYXUdfLL)
2026-03-05 00:24:46.421193 | orchestrator |
2026-03-05 00:24:46.421202 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-05 00:24:46.421212 | orchestrator | Thursday 05 March 2026 00:24:42 +0000 (0:00:01.955) 0:00:10.255 ********
2026-03-05 00:24:46.421222 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnG3f+Sy8v8ilSMwWYZc7UYwu9FAr/q6zze5vrp9xL86DAoDm1MHcR1B37df3SMpBctAg4CAtkQSSDqGwIW4AEu6OFOea9V2XofodGngaeUjoWz8xU07svT3+9lIJncLScGSPBsk7PrB+iQNGZ838Dyjczz+eG1+wFE6tZIcrxEYG0GM7jbILzhAElYOalsng6LrmABjdRUGhrOJSF3UOAkEtWnIjn/VMk0v0nD3Yz0DqR8zjyohJqJ9AXYckwNpEGDN4psid2mqvfVwCwb4oZ4+AdVnEyNH3rE07iDmUr8B9s2NnhGvTYO+7wHzFKo6ENyMRIviXnirn4/TlsrKLl0aVeNNLb2Dg5Z0GXB99mRT4i+Z2mJT4DEZRmEGIY6rksVLHI0jgLpDjrrmMeWk/oYSwbBkvQ74QjIWZGxFFMjznzVQM71eI6q/XpBnXYafgupHdr1D5rFCaWTbTDwGWLWA+rTroCsUhjuvVfLVL28Y9Eipo16rotLonPTKF2mMk=)
2026-03-05 00:24:46.421231 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAej9g6JPMSylCjSHDBaJycCXFyauR836n4N4BcCP4YNLuzVAoCa6quBnY24ZMc8mZKkcMPM/B2+gXpZjVaMRgg=)
2026-03-05 00:24:46.421247 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJWN65UOhQG0q6dvFWyg+bFX62CHvtCmQzT+XXRshGTX)
2026-03-05 00:24:46.421257 | orchestrator |
2026-03-05 00:24:46.421266 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-05 00:24:46.421275 | orchestrator | Thursday 05 March 2026 00:24:43 +0000 (0:00:00.965) 0:00:11.220 ********
2026-03-05 00:24:46.421347 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQDH1Ngdto8hjC1hxrekUkG2vxlLIzM5CQYWchYa3OmCG1o1jLvberJ1qxX+wuofQIQSeEGNbXN7Nel2E7r6ccbHHsG2wbMCGRRTwKSObuY2/LwoScx3WwSlUY4njKPJl13IQIhpiV6D5k4J4i16j4deoSfDmdxxq+5gX0Igw4BkWdcLpUAMdM/R4Bb5E7jkEeh3eHqptXbFkXGCk8LVU9cJmYIDdhmP6klZxGgElWHmcmlB6Z3RTC4BKwd7pptt59hVHnMxZCAssuNIJWSlWAVA2sGvvPsZrHduV1IUM3ZIZ1469QCiOnyYrvdXViPfLcQ/LrNEYh88M8aaTg4GEy3iT0r3JXfgOdL7im9+Vxc8yATMtVICKaw5uiBe9EcgDqxN4wH5RWMJY/HJW2eyxlj6PFeS8gwd5xUzHRmjl7/Bh/MiQeY0VyGKhrlxrgu7yASJKsjICq7m5W0s4ZrMIUIZcqu5haMe0qepwJnOiz3sxuUkYMgAOp957oO7xc1IYuc=) 2026-03-05 00:24:46.421357 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILR17swQkbZCM0wV5M55ttOEQwww024uLNii2YRDCmba) 2026-03-05 00:24:46.421368 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKP5kHdFXmAQkSoRJ4pAXBXtr3SIqZajcAsMRMcBCT20FczCHeSuxIpVAzXEOTKU3k/vhvbwmAUiy3ik7Cg5p7U=) 2026-03-05 00:24:46.421377 | orchestrator | 2026-03-05 00:24:46.421386 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:24:46.421395 | orchestrator | Thursday 05 March 2026 00:24:44 +0000 (0:00:00.977) 0:00:12.198 ******** 2026-03-05 00:24:46.421411 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIziP5x4yKsLzp9fIEZJWcoWqQGmdlIU8hXn4+DBxQiVyIJzDSfkUigb0khI4FAcRVv8IiTcyazzY5xc9kQofoWGD0oCSjuqED3IGs04AP1Bt/s/FRQ7VTduEfeVJUz2uBjBnWJsvkHxHBmeytJoDjwQxVsCE0u2A2GkaQMzzvsIqJUq40hO5worWC8OU9lGYpOCWfOjZpDe72ewBpEaYJdhNudkfo0mhAGnyZcfQKkh3NSvn2ya6q+OwupU/YxH8CyKBE9xcryBhgcYEnuUzGohIbmUFUkuCVTnva5W+vxqvAGBWNOFZtGjPeycThe39MXkEuDmfzcgiNcJ8vlUTy7pyygpHtIa+fuUFzMvc4fFlK+Wer/vO7vZ87liVGM51GUAX+lWKua7R6qYsFgPWtkkyLqpY2bA/j7YbPHv7b3aYcWPXOsuwqbKrPRxgat1i+uBFGAN/T6vm0N4L8R9NaJ5bgfAGDEnL4ynFIr1B+GhV9cAr1R+m2KOy7VnP4dRU=) 2026-03-05 00:24:56.882203 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGHWgGPcmMC4h3WvSpmNHQ7AONHpDumItmhtq8fkL/tkNP00e8B2VHc3fGEN5EbnjJwxbHZoMuWLn3mQ5kIBdfE=) 2026-03-05 00:24:56.882334 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA1R0spRHY8t3omcER1rx841qS1/uExEIXkANHPMz4eb) 2026-03-05 00:24:56.882352 | orchestrator | 2026-03-05 00:24:56.882381 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:24:56.882395 | orchestrator | Thursday 05 March 2026 00:24:46 +0000 (0:00:01.966) 0:00:14.164 ******** 2026-03-05 00:24:56.882408 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDqo+rdHDQx+HHaldj0zSSDIZDYR4NN5oik7FPR5hIFoPiJF5aOKqiyD8fFFkawlNZEDnUIB8js3bRcvN5o8bDM6nfSY1FF1vQwL4GiED9BxAOgxnuxaBrGrkD4xAzBS1OKZzw4L26V9PXNYf574TxhkTo3jewU7IUNQLGrXX7XS6TAMQtorUHdoeWU4fB1Aw90lnXSf+1VNq7zxQPN2ZT2na94QhxLFJ3k3vDqXMB6o6b0jJX8o42PYAuz2fsVKaMiMI5SGC8jzf4IVyjpWF+4lmRuUe5qrnA03LMaMDKtOywIWS8loQ+eLg3owg1EFjw56lEkjK0HCdMgMcnUk5xaUjU/LeJ/wDI60ylYnxNAKxS9jdnbnj7x4BHCtinfANtNeaKgWEbRS0ND8EDHNDiCzWjdGcyNobO/4ESsu6AKwFNVXFsn6h0vg1epuTJBW7mLSLu6OCcksg1/L5aeSn/kh+45CH7QE1rXYWwLuHczRejdw+f5VdBsdNbSnc8w3s=) 2026-03-05 00:24:56.882422 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPLy9tKXfhUeZYHfTZ3ZhhTx6sIUrvsDSsl3IYBBKKxs) 2026-03-05 00:24:56.882456 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHnWq70LEe3ejGrSRwVwDlyl9Tb17n5pGvvwA6ZVkbtHRYF4F1Q4MLudBmI+lHgbSvSKCX5eA04PhxEGVKY2B4g=) 2026-03-05 00:24:56.882468 | orchestrator | 2026-03-05 00:24:56.882479 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-05 00:24:56.882491 | orchestrator | Thursday 05 March 2026 00:24:47 +0000 (0:00:00.955) 
0:00:15.120 ******** 2026-03-05 00:24:56.882503 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-05 00:24:56.882514 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-05 00:24:56.882525 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-05 00:24:56.882536 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-05 00:24:56.882547 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-05 00:24:56.882559 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-05 00:24:56.882578 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-05 00:24:56.882596 | orchestrator | 2026-03-05 00:24:56.882614 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-05 00:24:56.882634 | orchestrator | Thursday 05 March 2026 00:24:52 +0000 (0:00:05.207) 0:00:20.327 ******** 2026-03-05 00:24:56.882654 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-05 00:24:56.882677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-05 00:24:56.882697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-05 00:24:56.882716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-05 00:24:56.882759 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-05 00:24:56.882771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-05 00:24:56.882781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-05 00:24:56.882792 | orchestrator | 2026-03-05 00:24:56.882802 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:24:56.882813 | orchestrator | Thursday 05 March 2026 00:24:52 +0000 (0:00:00.173) 0:00:20.500 ******** 2026-03-05 00:24:56.882824 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHxavwKWPa1GZN1A37vCgBkdhbtE724y120FmUoUTzU6K4028myRR7hTUSQEb2egUGvN7BekX8Yx68gPmFFLl8E=) 2026-03-05 00:24:56.882869 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCa8QeblgONtOR/8FirSE6W/GAZMrFSBB/cWXfK9dInpwjo0Qu4QwjsK404r1lp6JgbR0VwqMyYaBlhSQzeERWLsIwt8wB57m0YQYN5soC0vvPGPo4VyiWZbmFPBFM9DdsmkrWSqLBifbxtbEyGDqnsBY1+HPmQU0n9xaOVQcWNGwUpTf8eaasZDsjs8qSD4r7zVWURhzNvU9+EhgzFsXEmRWqNfF1MUeRPiRHbC6xBayRGI87qAcp5VGyQG1pBLsLgyHSqL3Vk4NsZW2nuv3eGRvPhG4e2QBHgdpzW0ECyxXrXHiDuvkxQgTMGg2RQAHHIHBiaLM3DNJAjSNA8cN5ZI1ECFRi7/CCzoIE4W8qUacLMpZQYJidw2PUwT4hcyDqXqzRf2Bd+g2xtn8jjZgvPAKTsq6qeJriLxI1S5o9/DVxnBdMtktBe2VP6ifM3ZQV26+PqcCEKcqMEry6mxrFbPz1qgmcl3vrgfe0v8o5b8kPHcIiHfzFQL5gfKYftzZE=) 2026-03-05 00:24:56.882894 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID+z5lpFx00VXCdCPNU9e6ohrZNkk0yg7Eg7TI/dIsZY) 2026-03-05 
00:24:56.882906 | orchestrator | 2026-03-05 00:24:56.882917 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:24:56.882929 | orchestrator | Thursday 05 March 2026 00:24:53 +0000 (0:00:01.024) 0:00:21.525 ******** 2026-03-05 00:24:56.882941 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsvGxgcj6xDxfgWjXkUXMgknz+lSBGDej9QxmZffxvl2uRi9Qz2YH2BHkZaMHpNo0RJ4RkZTVBBqs1F7gLACAVoF+9wU6g02ylmJm8WrkhYsTBs63+x0tIEjYE/ANLFLUGim54IrzWMKpjBpQseKscC6aOatqzde2yQR3HV9h8eBgCsxv8tZKFLGkz/jlKlBzew7imw2srpzPWpz3N8jSbCe5f9JsxaAczkPICB+d4AB1/Ut0i1BV5sRg9i4oCbfrZpk96csf6VbZZaItiINm4I4l4v8s5yrzTAqdIN3nMEfJ6v7GfOcKEk/3OeINNG1R7oDin+yI5zgPgBCzD6yAclHhJ66aoHTcWo0/ND6CIiaummvDQ6KCLAturI3uq2mGumPHLczbW9AJi7dyZDvJUrsXb1B1RZb8Gvu34eJ8Z72SnWaAzdh3aFrcreq6MaN83vAXtpxZExKko7M9m+8DBIleNj6yKJ4+xIqFmjSOm/ajb0o3jbUj8aM+6Mn9Krhs=) 2026-03-05 00:24:56.882952 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOrHwDwKaekpNIMx31PlKABilrO8dllVAoVhHgOvgRwrzr+m7ZcdbC9WxnZ8shvLxOeuYRyPuGgLorX7sW4nviE=) 2026-03-05 00:24:56.882963 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOc/nVJgM8SPeB3DDsN3d9rf+Zc+M+Z0Jt7Wtt+N+goo) 2026-03-05 00:24:56.882974 | orchestrator | 2026-03-05 00:24:56.882985 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:24:56.882996 | orchestrator | Thursday 05 March 2026 00:24:54 +0000 (0:00:01.039) 0:00:22.565 ******** 2026-03-05 00:24:56.883007 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOy8Cy7rv7P9EBbnDITVYYeMKgcBwE2veRafTYXUdfLL) 2026-03-05 00:24:56.883018 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCs+lNaaJ8x+idmY6ZFiq0KCYpEwY9eJasLY9joXQhP2fY2JUqX9ibVDv0K0H/SXzIDZy+aaAtUE670NsPmCg+AFK9LgqcwAwIN8Cz9NWWLwvhjR5db4dAahyjpLCMbGVRzxfbPdKybFWW0R8Eai+0Z3/qj3ahukdlqh2A8fvju7W6/EuK7N/GhPflJieck1d0bPqRNDsy3HC/wqSGGIsN8eYKEvFPJYIYCz2ni7+xQYlLshfsSjiE17CCygUZdhCiaBtBW9Lsc1hQS8F43NG0A8e/fx/B1X0nhEjU6yKq1vYCw5aQqxoSzyMvFSOrcTQbJL/J3gGlw+XpS0XlSlR8O2qR4076hJG6b+tiOz4BOANvX3zuNlhEduQRnTDFNOgZqhPzWhDMSP6wIXRq5NN4Vq+qcHdiY3W1LwA3f9u4TySzRKdKytkcUO15+aiF/fCRaEQIbSsb96SXXWPRL/PKvoXDP1YNGSxJhlAYXcgkF656tAKlXHSudUCUaWuLw+ws=) 2026-03-05 00:24:56.883029 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPdvWLHB+QjWi630Ogtim3HAmYd3AuGAoOvUwn49+y9KiKzsr3fSOu2CgDzr0pdcYx7uzvqvk4gx6/tmXqqd+NU=) 2026-03-05 00:24:56.883039 | orchestrator | 2026-03-05 00:24:56.883050 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:24:56.883061 | orchestrator | Thursday 05 March 2026 00:24:55 +0000 (0:00:01.055) 0:00:23.620 ******** 2026-03-05 00:24:56.883072 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAej9g6JPMSylCjSHDBaJycCXFyauR836n4N4BcCP4YNLuzVAoCa6quBnY24ZMc8mZKkcMPM/B2+gXpZjVaMRgg=) 2026-03-05 00:24:56.883083 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnG3f+Sy8v8ilSMwWYZc7UYwu9FAr/q6zze5vrp9xL86DAoDm1MHcR1B37df3SMpBctAg4CAtkQSSDqGwIW4AEu6OFOea9V2XofodGngaeUjoWz8xU07svT3+9lIJncLScGSPBsk7PrB+iQNGZ838Dyjczz+eG1+wFE6tZIcrxEYG0GM7jbILzhAElYOalsng6LrmABjdRUGhrOJSF3UOAkEtWnIjn/VMk0v0nD3Yz0DqR8zjyohJqJ9AXYckwNpEGDN4psid2mqvfVwCwb4oZ4+AdVnEyNH3rE07iDmUr8B9s2NnhGvTYO+7wHzFKo6ENyMRIviXnirn4/TlsrKLl0aVeNNLb2Dg5Z0GXB99mRT4i+Z2mJT4DEZRmEGIY6rksVLHI0jgLpDjrrmMeWk/oYSwbBkvQ74QjIWZGxFFMjznzVQM71eI6q/XpBnXYafgupHdr1D5rFCaWTbTDwGWLWA+rTroCsUhjuvVfLVL28Y9Eipo16rotLonPTKF2mMk=) 
2026-03-05 00:24:56.883107 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJWN65UOhQG0q6dvFWyg+bFX62CHvtCmQzT+XXRshGTX) 2026-03-05 00:25:01.293289 | orchestrator | 2026-03-05 00:25:01.293392 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:25:01.293408 | orchestrator | Thursday 05 March 2026 00:24:56 +0000 (0:00:01.006) 0:00:24.627 ******** 2026-03-05 00:25:01.293423 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH1Ngdto8hjC1hxrekUkG2vxlLIzM5CQYWchYa3OmCG1o1jLvberJ1qxX+wuofQIQSeEGNbXN7Nel2E7r6ccbHHsG2wbMCGRRTwKSObuY2/LwoScx3WwSlUY4njKPJl13IQIhpiV6D5k4J4i16j4deoSfDmdxxq+5gX0Igw4BkWdcLpUAMdM/R4Bb5E7jkEeh3eHqptXbFkXGCk8LVU9cJmYIDdhmP6klZxGgElWHmcmlB6Z3RTC4BKwd7pptt59hVHnMxZCAssuNIJWSlWAVA2sGvvPsZrHduV1IUM3ZIZ1469QCiOnyYrvdXViPfLcQ/LrNEYh88M8aaTg4GEy3iT0r3JXfgOdL7im9+Vxc8yATMtVICKaw5uiBe9EcgDqxN4wH5RWMJY/HJW2eyxlj6PFeS8gwd5xUzHRmjl7/Bh/MiQeY0VyGKhrlxrgu7yASJKsjICq7m5W0s4ZrMIUIZcqu5haMe0qepwJnOiz3sxuUkYMgAOp957oO7xc1IYuc=) 2026-03-05 00:25:01.293439 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKP5kHdFXmAQkSoRJ4pAXBXtr3SIqZajcAsMRMcBCT20FczCHeSuxIpVAzXEOTKU3k/vhvbwmAUiy3ik7Cg5p7U=) 2026-03-05 00:25:01.293453 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILR17swQkbZCM0wV5M55ttOEQwww024uLNii2YRDCmba) 2026-03-05 00:25:01.293465 | orchestrator | 2026-03-05 00:25:01.293476 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:25:01.293487 | orchestrator | Thursday 05 March 2026 00:24:57 +0000 (0:00:01.035) 0:00:25.663 ******** 2026-03-05 00:25:01.293498 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIA1R0spRHY8t3omcER1rx841qS1/uExEIXkANHPMz4eb) 2026-03-05 00:25:01.293511 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIziP5x4yKsLzp9fIEZJWcoWqQGmdlIU8hXn4+DBxQiVyIJzDSfkUigb0khI4FAcRVv8IiTcyazzY5xc9kQofoWGD0oCSjuqED3IGs04AP1Bt/s/FRQ7VTduEfeVJUz2uBjBnWJsvkHxHBmeytJoDjwQxVsCE0u2A2GkaQMzzvsIqJUq40hO5worWC8OU9lGYpOCWfOjZpDe72ewBpEaYJdhNudkfo0mhAGnyZcfQKkh3NSvn2ya6q+OwupU/YxH8CyKBE9xcryBhgcYEnuUzGohIbmUFUkuCVTnva5W+vxqvAGBWNOFZtGjPeycThe39MXkEuDmfzcgiNcJ8vlUTy7pyygpHtIa+fuUFzMvc4fFlK+Wer/vO7vZ87liVGM51GUAX+lWKua7R6qYsFgPWtkkyLqpY2bA/j7YbPHv7b3aYcWPXOsuwqbKrPRxgat1i+uBFGAN/T6vm0N4L8R9NaJ5bgfAGDEnL4ynFIr1B+GhV9cAr1R+m2KOy7VnP4dRU=) 2026-03-05 00:25:01.293522 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGHWgGPcmMC4h3WvSpmNHQ7AONHpDumItmhtq8fkL/tkNP00e8B2VHc3fGEN5EbnjJwxbHZoMuWLn3mQ5kIBdfE=) 2026-03-05 00:25:01.293533 | orchestrator | 2026-03-05 00:25:01.293544 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-05 00:25:01.293555 | orchestrator | Thursday 05 March 2026 00:24:58 +0000 (0:00:01.081) 0:00:26.744 ******** 2026-03-05 00:25:01.293586 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDqo+rdHDQx+HHaldj0zSSDIZDYR4NN5oik7FPR5hIFoPiJF5aOKqiyD8fFFkawlNZEDnUIB8js3bRcvN5o8bDM6nfSY1FF1vQwL4GiED9BxAOgxnuxaBrGrkD4xAzBS1OKZzw4L26V9PXNYf574TxhkTo3jewU7IUNQLGrXX7XS6TAMQtorUHdoeWU4fB1Aw90lnXSf+1VNq7zxQPN2ZT2na94QhxLFJ3k3vDqXMB6o6b0jJX8o42PYAuz2fsVKaMiMI5SGC8jzf4IVyjpWF+4lmRuUe5qrnA03LMaMDKtOywIWS8loQ+eLg3owg1EFjw56lEkjK0HCdMgMcnUk5xaUjU/LeJ/wDI60ylYnxNAKxS9jdnbnj7x4BHCtinfANtNeaKgWEbRS0ND8EDHNDiCzWjdGcyNobO/4ESsu6AKwFNVXFsn6h0vg1epuTJBW7mLSLu6OCcksg1/L5aeSn/kh+45CH7QE1rXYWwLuHczRejdw+f5VdBsdNbSnc8w3s=) 2026-03-05 00:25:01.293599 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPLy9tKXfhUeZYHfTZ3ZhhTx6sIUrvsDSsl3IYBBKKxs) 2026-03-05 00:25:01.293611 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHnWq70LEe3ejGrSRwVwDlyl9Tb17n5pGvvwA6ZVkbtHRYF4F1Q4MLudBmI+lHgbSvSKCX5eA04PhxEGVKY2B4g=) 2026-03-05 00:25:01.293622 | orchestrator | 2026-03-05 00:25:01.293633 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-05 00:25:01.293665 | orchestrator | Thursday 05 March 2026 00:25:00 +0000 (0:00:01.057) 0:00:27.802 ******** 2026-03-05 00:25:01.293678 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-05 00:25:01.293689 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-05 00:25:01.293700 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-05 00:25:01.293710 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-05 00:25:01.293758 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-05 00:25:01.293770 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-05 00:25:01.293781 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-05 00:25:01.293792 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:25:01.293804 | orchestrator | 2026-03-05 00:25:01.293832 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-05 00:25:01.293847 | orchestrator | Thursday 05 March 2026 00:25:00 +0000 (0:00:00.168) 0:00:27.971 ******** 2026-03-05 00:25:01.293861 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:25:01.293874 | orchestrator | 2026-03-05 00:25:01.293886 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-05 00:25:01.293899 | orchestrator | Thursday 05 March 2026 
00:25:00 +0000 (0:00:00.079) 0:00:28.051 ******** 2026-03-05 00:25:01.293920 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:25:01.293934 | orchestrator | 2026-03-05 00:25:01.293947 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-05 00:25:01.293961 | orchestrator | Thursday 05 March 2026 00:25:00 +0000 (0:00:00.063) 0:00:28.115 ******** 2026-03-05 00:25:01.293974 | orchestrator | changed: [testbed-manager] 2026-03-05 00:25:01.293987 | orchestrator | 2026-03-05 00:25:01.294000 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:25:01.294013 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-05 00:25:01.294087 | orchestrator | 2026-03-05 00:25:01.294101 | orchestrator | 2026-03-05 00:25:01.294114 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:25:01.294127 | orchestrator | Thursday 05 March 2026 00:25:01 +0000 (0:00:00.726) 0:00:28.842 ******** 2026-03-05 00:25:01.294141 | orchestrator | =============================================================================== 2026-03-05 00:25:01.294154 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.90s 2026-03-05 00:25:01.294167 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.21s 2026-03-05 00:25:01.294180 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.97s 2026-03-05 00:25:01.294191 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.96s 2026-03-05 00:25:01.294202 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-05 00:25:01.294213 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-05 
00:25:01.294223 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-05 00:25:01.294234 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-05 00:25:01.294245 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-05 00:25:01.294256 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-05 00:25:01.294267 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-05 00:25:01.294278 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-05 00:25:01.294289 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-05 00:25:01.294300 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-05 00:25:01.294310 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-03-05 00:25:01.294330 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2026-03-05 00:25:01.294341 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.73s 2026-03-05 00:25:01.294352 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-05 00:25:01.294363 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-03-05 00:25:01.294374 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-03-05 00:25:01.586697 | orchestrator | + osism apply squid 2026-03-05 00:25:13.616106 | orchestrator | 2026-03-05 00:25:13 | INFO  | Task 535e8b02-e76a-41b4-8402-f9e341857c39 (squid) was prepared for execution. 
2026-03-05 00:25:13.616237 | orchestrator | 2026-03-05 00:25:13 | INFO  | It takes a moment until task 535e8b02-e76a-41b4-8402-f9e341857c39 (squid) has been started and output is visible here. 2026-03-05 00:27:09.595546 | orchestrator | 2026-03-05 00:27:09.595707 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-05 00:27:09.595727 | orchestrator | 2026-03-05 00:27:09.595739 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-05 00:27:09.595751 | orchestrator | Thursday 05 March 2026 00:25:17 +0000 (0:00:00.120) 0:00:00.120 ******** 2026-03-05 00:27:09.595762 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-05 00:27:09.595775 | orchestrator | 2026-03-05 00:27:09.595786 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-05 00:27:09.595797 | orchestrator | Thursday 05 March 2026 00:25:17 +0000 (0:00:00.064) 0:00:00.184 ******** 2026-03-05 00:27:09.595808 | orchestrator | ok: [testbed-manager] 2026-03-05 00:27:09.595820 | orchestrator | 2026-03-05 00:27:09.595831 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-05 00:27:09.595842 | orchestrator | Thursday 05 March 2026 00:25:18 +0000 (0:00:01.114) 0:00:01.299 ******** 2026-03-05 00:27:09.595854 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-05 00:27:09.595865 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-05 00:27:09.595876 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-05 00:27:09.595887 | orchestrator | 2026-03-05 00:27:09.595898 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-05 00:27:09.595909 | orchestrator | Thursday 
05 March 2026 00:25:19 +0000 (0:00:00.996) 0:00:02.295 ******** 2026-03-05 00:27:09.595920 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-05 00:27:09.595931 | orchestrator | 2026-03-05 00:27:09.595942 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-05 00:27:09.595953 | orchestrator | Thursday 05 March 2026 00:25:20 +0000 (0:00:00.906) 0:00:03.202 ******** 2026-03-05 00:27:09.595964 | orchestrator | ok: [testbed-manager] 2026-03-05 00:27:09.595975 | orchestrator | 2026-03-05 00:27:09.595986 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-05 00:27:09.595997 | orchestrator | Thursday 05 March 2026 00:25:20 +0000 (0:00:00.338) 0:00:03.540 ******** 2026-03-05 00:27:09.596008 | orchestrator | changed: [testbed-manager] 2026-03-05 00:27:09.596019 | orchestrator | 2026-03-05 00:27:09.596031 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-05 00:27:09.596042 | orchestrator | Thursday 05 March 2026 00:25:21 +0000 (0:00:00.806) 0:00:04.347 ******** 2026-03-05 00:27:09.596052 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-05 00:27:09.596082 | orchestrator | ok: [testbed-manager]
2026-03-05 00:27:09.596110 | orchestrator |
2026-03-05 00:27:09.596122 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-03-05 00:27:09.596135 | orchestrator | Thursday 05 March 2026 00:25:52 +0000 (0:00:30.975) 0:00:35.323 ********
2026-03-05 00:27:09.596197 | orchestrator | changed: [testbed-manager]
2026-03-05 00:27:09.596221 | orchestrator |
2026-03-05 00:27:09.596240 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-03-05 00:27:09.596258 | orchestrator | Thursday 05 March 2026 00:26:08 +0000 (0:00:15.833) 0:00:51.156 ********
2026-03-05 00:27:09.596276 | orchestrator | Pausing for 60 seconds
2026-03-05 00:27:09.596295 | orchestrator | changed: [testbed-manager]
2026-03-05 00:27:09.596315 | orchestrator |
2026-03-05 00:27:09.596336 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-03-05 00:27:09.596357 | orchestrator | Thursday 05 March 2026 00:27:08 +0000 (0:01:00.085) 0:01:51.242 ********
2026-03-05 00:27:09.596377 | orchestrator | ok: [testbed-manager]
2026-03-05 00:27:09.596397 | orchestrator |
2026-03-05 00:27:09.596416 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-03-05 00:27:09.596436 | orchestrator | Thursday 05 March 2026 00:27:08 +0000 (0:00:00.055) 0:01:51.297 ********
2026-03-05 00:27:09.596455 | orchestrator | changed: [testbed-manager]
2026-03-05 00:27:09.596476 | orchestrator |
2026-03-05 00:27:09.596496 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:27:09.596515 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:27:09.596531 | orchestrator |
2026-03-05 00:27:09.596542 | orchestrator |
2026-03-05 00:27:09.596553 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:27:09.596564 | orchestrator | Thursday 05 March 2026 00:27:09 +0000 (0:00:00.625) 0:01:51.923 ********
2026-03-05 00:27:09.596575 | orchestrator | ===============================================================================
2026-03-05 00:27:09.596605 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2026-03-05 00:27:09.596617 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.98s
2026-03-05 00:27:09.596659 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.83s
2026-03-05 00:27:09.596672 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.11s
2026-03-05 00:27:09.596683 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.00s
2026-03-05 00:27:09.596694 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.91s
2026-03-05 00:27:09.596705 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.81s
2026-03-05 00:27:09.596716 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s
2026-03-05 00:27:09.596727 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s
2026-03-05 00:27:09.596737 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.06s
2026-03-05 00:27:09.596748 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2026-03-05 00:27:09.936360 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-05 00:27:09.936839 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-05 00:27:09.994966 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-05 00:27:09.995090 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-03-05 00:27:10.000786 | orchestrator | + set -e
2026-03-05 00:27:10.000872 | orchestrator | + NAMESPACE=kolla/release
2026-03-05 00:27:10.000888 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-03-05 00:27:10.006226 | orchestrator | ++ semver 9.5.0 9.0.0
2026-03-05 00:27:10.076738 | orchestrator | + [[ 1 -lt 0 ]]
2026-03-05 00:27:10.077267 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-03-05 00:27:22.151903 | orchestrator | 2026-03-05 00:27:22 | INFO  | Task 082e1fe6-1eac-4e6b-8778-9c035d98f16a (operator) was prepared for execution.
2026-03-05 00:27:22.151984 | orchestrator | 2026-03-05 00:27:22 | INFO  | It takes a moment until task 082e1fe6-1eac-4e6b-8778-9c035d98f16a (operator) has been started and output is visible here.
2026-03-05 00:27:39.711510 | orchestrator |
2026-03-05 00:27:39.711686 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-03-05 00:27:39.711705 | orchestrator |
2026-03-05 00:27:39.711718 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-05 00:27:39.711730 | orchestrator | Thursday 05 March 2026 00:27:26 +0000 (0:00:00.153) 0:00:00.153 ********
2026-03-05 00:27:39.711756 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:27:39.711769 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:27:39.711780 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:27:39.711791 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:27:39.711802 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:27:39.711812 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:27:39.711824 | orchestrator |
2026-03-05 00:27:39.712311 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-03-05 00:27:39.712328 | orchestrator | Thursday 05 March 2026 00:27:30 +0000 (0:00:04.342) 0:00:04.495 ********
2026-03-05 00:27:39.712342 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:27:39.712356 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:27:39.712368 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:27:39.712381 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:27:39.712412 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:27:39.712423 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:27:39.712434 | orchestrator |
2026-03-05 00:27:39.712446 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-03-05 00:27:39.712457 | orchestrator |
2026-03-05 00:27:39.712468 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-03-05 00:27:39.712479 | orchestrator | Thursday 05 March 2026 00:27:31 +0000 (0:00:00.944) 0:00:05.440 ********
2026-03-05 00:27:39.712490 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:27:39.712501 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:27:39.712512 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:27:39.712523 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:27:39.712534 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:27:39.712544 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:27:39.712556 | orchestrator |
2026-03-05 00:27:39.712567 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-03-05 00:27:39.712578 | orchestrator | Thursday 05 March 2026 00:27:31 +0000 (0:00:00.175) 0:00:05.616 ********
2026-03-05 00:27:39.712589 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:27:39.712672 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:27:39.712687 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:27:39.712697 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:27:39.712707 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:27:39.712716 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:27:39.712726 | orchestrator |
2026-03-05 00:27:39.712735 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-05 00:27:39.712745 | orchestrator | Thursday 05 March 2026 00:27:31 +0000 (0:00:00.167) 0:00:05.783 ********
2026-03-05 00:27:39.712755 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:27:39.712766 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:27:39.712776 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:27:39.712786 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:27:39.712795 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:27:39.712805 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:27:39.712815 | orchestrator |
2026-03-05 00:27:39.712824 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-05 00:27:39.712834 | orchestrator | Thursday 05 March 2026 00:27:32 +0000 (0:00:00.646) 0:00:06.429 ********
2026-03-05 00:27:39.712844 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:27:39.712853 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:27:39.712863 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:27:39.712873 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:27:39.712882 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:27:39.712892 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:27:39.712901 | orchestrator |
2026-03-05 00:27:39.712911 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-05 00:27:39.712943 | orchestrator | Thursday 05 March 2026 00:27:33 +0000 (0:00:00.809) 0:00:07.239 ********
2026-03-05 00:27:39.712953 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-03-05 00:27:39.712963 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-05 00:27:39.712972 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-03-05 00:27:39.712982 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-05 00:27:39.712992 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-03-05 00:27:39.713001 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-03-05 00:27:39.713011 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-05 00:27:39.713020 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-05 00:27:39.713030 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-05 00:27:39.713039 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-05 00:27:39.713049 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-05 00:27:39.713058 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-05 00:27:39.713068 | orchestrator |
2026-03-05 00:27:39.713077 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-05 00:27:39.713087 | orchestrator | Thursday 05 March 2026 00:27:34 +0000 (0:00:01.346) 0:00:08.586 ********
2026-03-05 00:27:39.713097 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:27:39.713106 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:27:39.713116 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:27:39.713125 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:27:39.713135 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:27:39.713144 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:27:39.713154 | orchestrator |
2026-03-05 00:27:39.713164 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-05 00:27:39.713174 | orchestrator | Thursday 05 March 2026 00:27:36 +0000 (0:00:01.295) 0:00:09.882 ********
2026-03-05 00:27:39.713184 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-05 00:27:39.713193 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-05 00:27:39.713203 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-05 00:27:39.713212 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-05 00:27:39.713241 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-05 00:27:39.713251 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-05 00:27:39.713261 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-05 00:27:39.713271 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-05 00:27:39.713280 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-05 00:27:39.713290 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-05 00:27:39.713299 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-05 00:27:39.713309 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-05 00:27:39.713319 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-05 00:27:39.713328 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-05 00:27:39.713337 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-05 00:27:39.713347 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-05 00:27:39.713357 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-05 00:27:39.713367 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-05 00:27:39.713376 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-05 00:27:39.713386 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-05 00:27:39.713403 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-05 00:27:39.713412 | orchestrator |
2026-03-05 00:27:39.713422 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-05 00:27:39.713433 | orchestrator | Thursday 05 March 2026 00:27:37 +0000 (0:00:01.401) 0:00:11.283 ********
2026-03-05 00:27:39.713443 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:27:39.713452 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:27:39.713473 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:27:39.713483 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:27:39.713493 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:27:39.713502 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:27:39.713512 | orchestrator |
2026-03-05 00:27:39.713558 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-05 00:27:39.713570 | orchestrator | Thursday 05 March 2026 00:27:37 +0000 (0:00:00.175) 0:00:11.459 ********
2026-03-05 00:27:39.713580 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:27:39.713637 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:27:39.713648 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:27:39.713658 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:27:39.713667 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:27:39.713677 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:27:39.713687 | orchestrator |
2026-03-05 00:27:39.713696 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-05 00:27:39.713706 | orchestrator | Thursday 05 March 2026 00:27:37 +0000 (0:00:00.171) 0:00:11.630 ********
2026-03-05 00:27:39.713716 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:27:39.713726 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:27:39.713735 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:27:39.713745 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:27:39.713754 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:27:39.713764 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:27:39.713773 | orchestrator |
2026-03-05 00:27:39.713783 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-05 00:27:39.713793 | orchestrator | Thursday 05 March 2026 00:27:38 +0000 (0:00:00.596) 0:00:12.226 ********
2026-03-05 00:27:39.713802 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:27:39.713812 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:27:39.713821 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:27:39.713831 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:27:39.713850 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:27:39.713860 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:27:39.713870 | orchestrator |
2026-03-05 00:27:39.713880 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-05 00:27:39.713889 | orchestrator | Thursday 05 March 2026 00:27:38 +0000 (0:00:00.168) 0:00:12.395 ********
2026-03-05 00:27:39.713899 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-05 00:27:39.713909 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-05 00:27:39.713918 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:27:39.713928 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-05 00:27:39.713937 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:27:39.713946 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:27:39.713956 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-05 00:27:39.713966 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:27:39.713975 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-05 00:27:39.713984 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:27:39.713994 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-05 00:27:39.714004 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:27:39.714075 | orchestrator |
2026-03-05 00:27:39.714089 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-05 00:27:39.714099 | orchestrator | Thursday 05 March 2026 00:27:39 +0000 (0:00:00.795) 0:00:13.191 ********
2026-03-05 00:27:39.714116 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:27:39.714126 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:27:39.714135 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:27:39.714145 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:27:39.714154 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:27:39.714164 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:27:39.714173 | orchestrator |
2026-03-05 00:27:39.714183 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-05 00:27:39.714193 | orchestrator | Thursday 05 March 2026 00:27:39 +0000 (0:00:00.156) 0:00:13.347 ********
2026-03-05 00:27:39.714202 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:27:39.714212 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:27:39.714222 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:27:39.714231 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:27:39.714250 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:27:41.068268 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:27:41.068367 | orchestrator |
2026-03-05 00:27:41.068379 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-05 00:27:41.068387 | orchestrator | Thursday 05 March 2026 00:27:39 +0000 (0:00:00.141) 0:00:13.488 ********
2026-03-05 00:27:41.068394 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:27:41.068402 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:27:41.068409 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:27:41.068416 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:27:41.068422 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:27:41.068429 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:27:41.068436 | orchestrator |
2026-03-05 00:27:41.068443 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-05 00:27:41.068450 | orchestrator | Thursday 05 March 2026 00:27:39 +0000 (0:00:00.140) 0:00:13.629 ********
2026-03-05 00:27:41.068456 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:27:41.068463 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:27:41.068470 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:27:41.068491 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:27:41.068498 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:27:41.068504 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:27:41.068510 | orchestrator |
2026-03-05 00:27:41.068518 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-05 00:27:41.068522 | orchestrator | Thursday 05 March 2026 00:27:40 +0000 (0:00:00.720) 0:00:14.349 ********
2026-03-05 00:27:41.068525 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:27:41.068529 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:27:41.068533 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:27:41.068537 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:27:41.068541 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:27:41.068544 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:27:41.068548 | orchestrator |
2026-03-05 00:27:41.068552 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:27:41.068557 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-05 00:27:41.068563 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-05 00:27:41.068567 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-05 00:27:41.068571 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-05 00:27:41.068575 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-05 00:27:41.068594 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-05 00:27:41.068631 | orchestrator |
2026-03-05 00:27:41.068636 | orchestrator |
2026-03-05 00:27:41.068640 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:27:41.068644 | orchestrator | Thursday 05 March 2026 00:27:40 +0000 (0:00:00.246) 0:00:14.596 ********
2026-03-05 00:27:41.068647 | orchestrator | ===============================================================================
2026-03-05 00:27:41.068651 | orchestrator | Gathering Facts --------------------------------------------------------- 4.34s
2026-03-05 00:27:41.068655 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.40s
2026-03-05 00:27:41.068660 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.35s
2026-03-05 00:27:41.068663 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.30s
2026-03-05 00:27:41.068667 | orchestrator | Do not require tty for all users ---------------------------------------- 0.94s
2026-03-05 00:27:41.068671 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2026-03-05 00:27:41.068675 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.80s
2026-03-05 00:27:41.068679 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.72s
2026-03-05 00:27:41.068682 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.65s
2026-03-05 00:27:41.068686 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s
2026-03-05 00:27:41.068690 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s
2026-03-05 00:27:41.068694 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s
2026-03-05 00:27:41.068697 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2026-03-05 00:27:41.068701 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s
2026-03-05 00:27:41.068705 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s
2026-03-05 00:27:41.068709 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2026-03-05 00:27:41.068712 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2026-03-05 00:27:41.068716 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2026-03-05 00:27:41.068720 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2026-03-05 00:27:41.383779 | orchestrator | + osism apply --environment custom facts
2026-03-05 00:27:43.331820 | orchestrator | 2026-03-05 00:27:43 | INFO  | Trying to run play facts in environment custom
2026-03-05 00:27:53.578002 | orchestrator | 2026-03-05 00:27:53 | INFO  | Task 7932a5d9-91c8-4a8b-95e1-a530169df3fa (facts) was prepared for execution.
2026-03-05 00:27:53.578164 | orchestrator | 2026-03-05 00:27:53 | INFO  | It takes a moment until task 7932a5d9-91c8-4a8b-95e1-a530169df3fa (facts) has been started and output is visible here.
2026-03-05 00:28:38.934789 | orchestrator |
2026-03-05 00:28:38.934891 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-05 00:28:38.934903 | orchestrator |
2026-03-05 00:28:38.934911 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-05 00:28:38.934918 | orchestrator | Thursday 05 March 2026 00:27:57 +0000 (0:00:00.087) 0:00:00.087 ********
2026-03-05 00:28:38.934924 | orchestrator | ok: [testbed-manager]
2026-03-05 00:28:38.934931 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:28:38.934938 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:28:38.934943 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:28:38.934951 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:28:38.934958 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:28:38.934965 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:28:38.934990 | orchestrator |
2026-03-05 00:28:38.934998 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-05 00:28:38.935005 | orchestrator | Thursday 05 March 2026 00:27:59 +0000 (0:00:01.403) 0:00:01.491 ********
2026-03-05 00:28:38.935012 | orchestrator | ok: [testbed-manager]
2026-03-05 00:28:38.935018 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:28:38.935024 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:28:38.935030 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:28:38.935037 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:28:38.935043 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:28:38.935049 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:28:38.935056 | orchestrator |
2026-03-05 00:28:38.935064 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-05 00:28:38.935070 | orchestrator |
2026-03-05 00:28:38.935078 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-05 00:28:38.935084 | orchestrator | Thursday 05 March 2026 00:28:00 +0000 (0:00:01.410) 0:00:02.902 ********
2026-03-05 00:28:38.935091 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:28:38.935097 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:28:38.935103 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:28:38.935109 | orchestrator |
2026-03-05 00:28:38.935116 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-05 00:28:38.935123 | orchestrator | Thursday 05 March 2026 00:28:00 +0000 (0:00:00.143) 0:00:03.045 ********
2026-03-05 00:28:38.935130 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:28:38.935137 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:28:38.935143 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:28:38.935149 | orchestrator |
2026-03-05 00:28:38.935156 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-05 00:28:38.935163 | orchestrator | Thursday 05 March 2026 00:28:01 +0000 (0:00:00.214) 0:00:03.259 ********
2026-03-05 00:28:38.935169 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:28:38.935176 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:28:38.935182 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:28:38.935189 | orchestrator |
2026-03-05 00:28:38.935195 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-05 00:28:38.935202 | orchestrator | Thursday 05 March 2026 00:28:01 +0000 (0:00:00.234) 0:00:03.493 ********
2026-03-05 00:28:38.935211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:28:38.935219 | orchestrator |
2026-03-05 00:28:38.935225 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-05 00:28:38.935232 | orchestrator | Thursday 05 March 2026 00:28:01 +0000 (0:00:00.148) 0:00:03.642 ********
2026-03-05 00:28:38.935239 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:28:38.935245 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:28:38.935251 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:28:38.935258 | orchestrator |
2026-03-05 00:28:38.935264 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-05 00:28:38.935270 | orchestrator | Thursday 05 March 2026 00:28:01 +0000 (0:00:00.453) 0:00:04.095 ********
2026-03-05 00:28:38.935275 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:28:38.935282 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:28:38.935288 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:28:38.935294 | orchestrator |
2026-03-05 00:28:38.935300 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-05 00:28:38.935307 | orchestrator | Thursday 05 March 2026 00:28:02 +0000 (0:00:00.164) 0:00:04.259 ********
2026-03-05 00:28:38.935313 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:28:38.935319 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:28:38.935325 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:28:38.935331 | orchestrator |
2026-03-05 00:28:38.935337 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-05 00:28:38.935353 | orchestrator | Thursday 05 March 2026 00:28:03 +0000 (0:00:01.180) 0:00:05.440 ********
2026-03-05 00:28:38.935360 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:28:38.935366 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:28:38.935372 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:28:38.935379 | orchestrator |
2026-03-05 00:28:38.935385 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-05 00:28:38.935434 | orchestrator | Thursday 05 March 2026 00:28:03 +0000 (0:00:00.537) 0:00:05.978 ********
2026-03-05 00:28:38.935443 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:28:38.935450 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:28:38.935457 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:28:38.935463 | orchestrator |
2026-03-05 00:28:38.935472 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-05 00:28:38.935480 | orchestrator | Thursday 05 March 2026 00:28:04 +0000 (0:00:01.163) 0:00:07.141 ********
2026-03-05 00:28:38.935487 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:28:38.935492 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:28:38.935522 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:28:38.935531 | orchestrator |
2026-03-05 00:28:38.935537 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-05 00:28:38.935544 | orchestrator | Thursday 05 March 2026 00:28:21 +0000 (0:00:16.440) 0:00:23.581 ********
2026-03-05 00:28:38.935550 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:28:38.935556 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:28:38.935562 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:28:38.935568 | orchestrator |
2026-03-05 00:28:38.935574 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-05 00:28:38.935597 | orchestrator | Thursday 05 March 2026 00:28:21 +0000 (0:00:00.097) 0:00:23.679 ********
2026-03-05 00:28:38.935605 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:28:38.935612 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:28:38.935619 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:28:38.935626 | orchestrator |
2026-03-05 00:28:38.935633 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-05 00:28:38.935643 | orchestrator | Thursday 05 March 2026 00:28:29 +0000 (0:00:07.956) 0:00:31.635 ********
2026-03-05 00:28:38.935652 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:28:38.935659 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:28:38.935666 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:28:38.935672 | orchestrator |
2026-03-05 00:28:38.935678 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-05 00:28:38.935684 | orchestrator | Thursday 05 March 2026 00:28:29 +0000 (0:00:00.469) 0:00:32.105 ********
2026-03-05 00:28:38.935690 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-05 00:28:38.935696 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-05 00:28:38.935703 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-05 00:28:38.935709 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-05 00:28:38.935715 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-05 00:28:38.935721 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-05 00:28:38.935727 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-05 00:28:38.935733 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-05 00:28:38.935739 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-05 00:28:38.935746 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-05 00:28:38.935752 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-05 00:28:38.935757 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-05 00:28:38.935764 | orchestrator |
2026-03-05 00:28:38.935771 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-05 00:28:38.935785 | orchestrator | Thursday 05 March 2026 00:28:33 +0000 (0:00:03.702) 0:00:35.807 ********
2026-03-05 00:28:38.935790 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:28:38.935796 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:28:38.935801 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:28:38.935806 | orchestrator |
2026-03-05 00:28:38.935812 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-05 00:28:38.935818 | orchestrator |
2026-03-05 00:28:38.935823 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-05 00:28:38.935830 | orchestrator | Thursday 05 March 2026 00:28:35 +0000 (0:00:01.364) 0:00:37.172 ********
2026-03-05 00:28:38.935836 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:28:38.935842 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:28:38.935848 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:28:38.935854 | orchestrator | ok: [testbed-manager]
2026-03-05 00:28:38.935861 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:28:38.935867 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:28:38.935873 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:28:38.935879 | orchestrator |
2026-03-05 00:28:38.935886 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:28:38.935893 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:28:38.935901 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:28:38.935909 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:28:38.935916 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:28:38.935922 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:28:38.935929 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:28:38.935935 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:28:38.935942 | orchestrator |
2026-03-05 00:28:38.935948 | orchestrator |
2026-03-05 00:28:38.935955 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:28:38.935961 | orchestrator | Thursday 05 March 2026 00:28:38 +0000 (0:00:03.880) 0:00:41.052 ********
2026-03-05 00:28:38.935967 | orchestrator | ===============================================================================
2026-03-05 00:28:38.935973 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.44s
2026-03-05 00:28:38.935979 | orchestrator | Install required packages (Debian) -------------------------------------- 7.96s
2026-03-05 00:28:38.935985 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.88s
2026-03-05 00:28:38.935991 | orchestrator | Copy fact files --------------------------------------------------------- 3.70s
2026-03-05 00:28:38.935997 | orchestrator | Copy fact file ---------------------------------------------------------- 1.41s
2026-03-05 00:28:38.936003 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2026-03-05 00:28:38.936016 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.36s
2026-03-05 00:28:39.174651 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.18s
2026-03-05 00:28:39.174752 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.16s
2026-03-05 00:28:39.174787 | orchestrator | osism.commons.repository : Remove sources.list
file --------------------- 0.54s 2026-03-05 00:28:39.174800 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s 2026-03-05 00:28:39.174834 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2026-03-05 00:28:39.174845 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s 2026-03-05 00:28:39.174856 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2026-03-05 00:28:39.174867 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.16s 2026-03-05 00:28:39.174878 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2026-03-05 00:28:39.174889 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.14s 2026-03-05 00:28:39.174900 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2026-03-05 00:28:39.491562 | orchestrator | + osism apply bootstrap 2026-03-05 00:28:51.516242 | orchestrator | 2026-03-05 00:28:51 | INFO  | Task eb122a3e-a411-4a7f-b9d0-2c880b00c3ae (bootstrap) was prepared for execution. 2026-03-05 00:28:51.516331 | orchestrator | 2026-03-05 00:28:51 | INFO  | It takes a moment until task eb122a3e-a411-4a7f-b9d0-2c880b00c3ae (bootstrap) has been started and output is visible here. 
2026-03-05 00:29:08.074757 | orchestrator |
2026-03-05 00:29:08.074828 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-05 00:29:08.074837 | orchestrator |
2026-03-05 00:29:08.074842 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-05 00:29:08.074848 | orchestrator | Thursday 05 March 2026 00:28:55 +0000 (0:00:00.151) 0:00:00.151 ********
2026-03-05 00:29:08.074852 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:08.074858 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:08.074862 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:08.074867 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:08.074871 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:08.074875 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:08.074880 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:08.074884 | orchestrator |
2026-03-05 00:29:08.074889 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-05 00:29:08.074893 | orchestrator |
2026-03-05 00:29:08.074898 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-05 00:29:08.074902 | orchestrator | Thursday 05 March 2026 00:28:56 +0000 (0:00:00.271) 0:00:00.422 ********
2026-03-05 00:29:08.074907 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:08.074911 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:08.074915 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:08.074920 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:08.074924 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:08.074928 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:08.074932 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:08.074937 | orchestrator |
2026-03-05 00:29:08.074941 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-05 00:29:08.074945 | orchestrator |
2026-03-05 00:29:08.074950 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-05 00:29:08.074954 | orchestrator | Thursday 05 March 2026 00:28:59 +0000 (0:00:03.859) 0:00:04.282 ********
2026-03-05 00:29:08.074959 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-05 00:29:08.074977 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-05 00:29:08.074981 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-05 00:29:08.074986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-05 00:29:08.074996 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-05 00:29:08.075001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 00:29:08.075005 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-05 00:29:08.075009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 00:29:08.075014 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-05 00:29:08.075030 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 00:29:08.075034 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-05 00:29:08.075039 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-05 00:29:08.075043 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-05 00:29:08.075047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-05 00:29:08.075052 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-05 00:29:08.075056 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-05 00:29:08.075061 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:29:08.075065 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-05 00:29:08.075069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-05 00:29:08.075074 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-05 00:29:08.075078 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-05 00:29:08.075082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-05 00:29:08.075086 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-05 00:29:08.075091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-05 00:29:08.075095 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:29:08.075099 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-05 00:29:08.075103 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-05 00:29:08.075108 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-05 00:29:08.075112 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-05 00:29:08.075116 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-05 00:29:08.075121 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-05 00:29:08.075125 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-05 00:29:08.075130 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-05 00:29:08.075134 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-05 00:29:08.075139 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-05 00:29:08.075143 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-05 00:29:08.075147 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-05 00:29:08.075152 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-05 00:29:08.075156 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-05 00:29:08.075160 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-05 00:29:08.075164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-05 00:29:08.075169 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-05 00:29:08.075173 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-05 00:29:08.075177 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-05 00:29:08.075181 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:29:08.075186 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-05 00:29:08.075199 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-05 00:29:08.075203 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-05 00:29:08.075208 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-05 00:29:08.075218 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:29:08.075222 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:29:08.075226 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:29:08.075231 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-05 00:29:08.075235 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-05 00:29:08.075239 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-05 00:29:08.075247 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:29:08.075251 | orchestrator |
2026-03-05 00:29:08.075256 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-05 00:29:08.075260 | orchestrator |
2026-03-05 00:29:08.075265 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-05 00:29:08.075269 | orchestrator | Thursday 05 March 2026 00:29:00 +0000 (0:00:00.592) 0:00:04.875 ********
2026-03-05 00:29:08.075273 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:08.075278 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:08.075282 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:08.075286 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:08.075290 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:08.075295 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:08.075299 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:08.075303 | orchestrator |
2026-03-05 00:29:08.075308 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-05 00:29:08.075312 | orchestrator | Thursday 05 March 2026 00:29:01 +0000 (0:00:01.344) 0:00:06.219 ********
2026-03-05 00:29:08.075316 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:08.075321 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:08.075325 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:08.075330 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:08.075335 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:08.075340 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:08.075346 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:08.075351 | orchestrator |
2026-03-05 00:29:08.075356 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-05 00:29:08.075361 | orchestrator | Thursday 05 March 2026 00:29:03 +0000 (0:00:00.317) 0:00:07.469 ********
2026-03-05 00:29:08.075367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:29:08.075374 | orchestrator |
2026-03-05 00:29:08.075380 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-05 00:29:08.075385 | orchestrator | Thursday 05 March 2026 00:29:03 +0000 (0:00:00.317) 0:00:07.786 ********
2026-03-05 00:29:08.075390 | orchestrator | changed: [testbed-manager]
2026-03-05 00:29:08.075395 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:29:08.075400 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:29:08.075405 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:29:08.075411 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:29:08.075416 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:29:08.075421 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:29:08.075426 | orchestrator |
2026-03-05 00:29:08.075431 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-05 00:29:08.075436 | orchestrator | Thursday 05 March 2026 00:29:05 +0000 (0:00:02.063) 0:00:09.849 ********
2026-03-05 00:29:08.075441 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:29:08.075448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:29:08.075454 | orchestrator |
2026-03-05 00:29:08.075460 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-05 00:29:08.075465 | orchestrator | Thursday 05 March 2026 00:29:05 +0000 (0:00:00.279) 0:00:10.129 ********
2026-03-05 00:29:08.075470 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:29:08.075476 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:29:08.075494 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:29:08.075500 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:29:08.075505 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:29:08.075510 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:29:08.075515 | orchestrator |
2026-03-05 00:29:08.075523 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-05 00:29:08.075530 | orchestrator | Thursday 05 March 2026 00:29:06 +0000 (0:00:01.044) 0:00:11.174 ********
2026-03-05 00:29:08.075535 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:29:08.075541 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:29:08.075546 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:29:08.075551 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:29:08.075556 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:29:08.075561 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:29:08.075566 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:29:08.075571 | orchestrator |
2026-03-05 00:29:08.075576 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-05 00:29:08.075581 | orchestrator | Thursday 05 March 2026 00:29:07 +0000 (0:00:00.611) 0:00:11.786 ********
2026-03-05 00:29:08.075587 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:29:08.075592 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:29:08.075597 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:29:08.075602 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:29:08.075607 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:29:08.075612 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:29:08.075618 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:08.075623 | orchestrator |
2026-03-05 00:29:08.075628 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-05 00:29:08.075635 | orchestrator | Thursday 05 March 2026 00:29:07 +0000 (0:00:00.480) 0:00:12.267 ********
2026-03-05 00:29:08.075640 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:29:08.075645 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:29:08.075653 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:29:20.885539 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:29:20.885628 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:29:20.885638 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:29:20.885646 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:29:20.885654 | orchestrator |
2026-03-05 00:29:20.885663 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-05 00:29:20.885672 | orchestrator | Thursday 05 March 2026 00:29:08 +0000 (0:00:00.245) 0:00:12.512 ********
2026-03-05 00:29:20.885681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:29:20.885702 | orchestrator |
2026-03-05 00:29:20.885710 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-05 00:29:20.885719 | orchestrator | Thursday 05 March 2026 00:29:08 +0000 (0:00:00.377) 0:00:12.889 ********
2026-03-05 00:29:20.885727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:29:20.885734 | orchestrator |
2026-03-05 00:29:20.885741 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-05 00:29:20.885749 | orchestrator | Thursday 05 March 2026 00:29:08 +0000 (0:00:00.356) 0:00:13.246 ********
2026-03-05 00:29:20.885756 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:20.885764 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:20.885771 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:20.885778 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:20.885785 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:20.885793 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:20.885800 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:20.885807 | orchestrator |
2026-03-05 00:29:20.885814 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-05 00:29:20.885821 | orchestrator | Thursday 05 March 2026 00:29:10 +0000 (0:00:01.521) 0:00:14.767 ********
2026-03-05 00:29:20.885849 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:29:20.885856 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:29:20.885863 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:29:20.885870 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:29:20.885878 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:29:20.885885 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:29:20.885892 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:29:20.885899 | orchestrator |
2026-03-05 00:29:20.885906 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-05 00:29:20.885913 | orchestrator | Thursday 05 March 2026 00:29:10 +0000 (0:00:00.233) 0:00:15.000 ********
2026-03-05 00:29:20.885919 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:20.885927 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:20.885934 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:20.885941 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:20.885948 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:20.885954 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:20.885961 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:20.885968 | orchestrator |
2026-03-05 00:29:20.885975 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-05 00:29:20.885982 | orchestrator | Thursday 05 March 2026 00:29:11 +0000 (0:00:00.612) 0:00:15.613 ********
2026-03-05 00:29:20.885989 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:29:20.885997 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:29:20.886004 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:29:20.886011 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:29:20.886060 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:29:20.886068 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:29:20.886075 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:29:20.886083 | orchestrator |
2026-03-05 00:29:20.886091 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-05 00:29:20.886099 | orchestrator | Thursday 05 March 2026 00:29:11 +0000 (0:00:00.324) 0:00:15.938 ********
2026-03-05 00:29:20.886107 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:20.886114 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:29:20.886121 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:29:20.886128 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:29:20.886135 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:29:20.886143 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:29:20.886150 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:29:20.886157 | orchestrator |
2026-03-05 00:29:20.886172 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-05 00:29:20.886180 | orchestrator | Thursday 05 March 2026 00:29:12 +0000 (0:00:00.516) 0:00:16.454 ********
2026-03-05 00:29:20.886187 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:20.886195 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:29:20.886202 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:29:20.886209 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:29:20.886217 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:29:20.886224 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:29:20.886231 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:29:20.886239 | orchestrator |
2026-03-05 00:29:20.886246 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-05 00:29:20.886254 | orchestrator | Thursday 05 March 2026 00:29:13 +0000 (0:00:01.253) 0:00:17.708 ********
2026-03-05 00:29:20.886261 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:20.886269 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:20.886276 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:20.886284 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:20.886291 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:20.886298 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:20.886305 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:20.886313 | orchestrator |
2026-03-05 00:29:20.886320 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-05 00:29:20.886334 | orchestrator | Thursday 05 March 2026 00:29:14 +0000 (0:00:01.151) 0:00:18.860 ********
2026-03-05 00:29:20.886357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:29:20.886366 | orchestrator |
2026-03-05 00:29:20.886374 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-05 00:29:20.886382 | orchestrator | Thursday 05 March 2026 00:29:14 +0000 (0:00:00.310) 0:00:19.170 ********
2026-03-05 00:29:20.886389 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:29:20.886397 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:29:20.886405 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:29:20.886412 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:29:20.886419 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:29:20.886427 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:29:20.886434 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:29:20.886442 | orchestrator |
2026-03-05 00:29:20.886450 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-05 00:29:20.886457 | orchestrator | Thursday 05 March 2026 00:29:16 +0000 (0:00:01.316) 0:00:20.487 ********
2026-03-05 00:29:20.886464 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:20.886484 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:20.886491 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:20.886498 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:20.886504 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:20.886511 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:20.886518 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:20.886524 | orchestrator |
2026-03-05 00:29:20.886531 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-05 00:29:20.886538 | orchestrator | Thursday 05 March 2026 00:29:16 +0000 (0:00:00.221) 0:00:20.709 ********
2026-03-05 00:29:20.886545 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:20.886551 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:20.886558 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:20.886564 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:20.886571 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:20.886577 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:20.886584 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:20.886590 | orchestrator |
2026-03-05 00:29:20.886597 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-05 00:29:20.886604 | orchestrator | Thursday 05 March 2026 00:29:16 +0000 (0:00:00.238) 0:00:20.947 ********
2026-03-05 00:29:20.886610 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:20.886617 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:20.886623 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:20.886630 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:20.886637 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:20.886643 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:20.886650 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:20.886657 | orchestrator |
2026-03-05 00:29:20.886663 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-05 00:29:20.886670 | orchestrator | Thursday 05 March 2026 00:29:16 +0000 (0:00:00.254) 0:00:21.201 ********
2026-03-05 00:29:20.886678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:29:20.886686 | orchestrator |
2026-03-05 00:29:20.886692 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-05 00:29:20.886699 | orchestrator | Thursday 05 March 2026 00:29:17 +0000 (0:00:00.277) 0:00:21.478 ********
2026-03-05 00:29:20.886705 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:20.886712 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:20.886725 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:20.886732 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:20.886738 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:20.886745 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:20.886751 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:20.886758 | orchestrator |
2026-03-05 00:29:20.886765 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-05 00:29:20.886772 | orchestrator | Thursday 05 March 2026 00:29:17 +0000 (0:00:00.639) 0:00:22.118 ********
2026-03-05 00:29:20.886779 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:29:20.886785 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:29:20.886792 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:29:20.886799 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:29:20.886805 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:29:20.886812 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:29:20.886819 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:29:20.886825 | orchestrator |
2026-03-05 00:29:20.886832 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-05 00:29:20.886839 | orchestrator | Thursday 05 March 2026 00:29:17 +0000 (0:00:00.228) 0:00:22.347 ********
2026-03-05 00:29:20.886846 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:20.886853 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:20.886860 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:20.886867 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:20.886874 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:29:20.886881 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:29:20.886888 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:29:20.886894 | orchestrator |
2026-03-05 00:29:20.886901 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-05 00:29:20.886908 | orchestrator | Thursday 05 March 2026 00:29:19 +0000 (0:00:01.129) 0:00:23.477 ********
2026-03-05 00:29:20.886914 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:20.886921 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:20.886928 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:20.886935 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:29:20.886947 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:29:20.886954 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:29:20.886961 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:20.886968 | orchestrator |
2026-03-05 00:29:20.886974 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-05 00:29:20.886981 | orchestrator | Thursday 05 March 2026 00:29:19 +0000 (0:00:00.601) 0:00:24.078 ********
2026-03-05 00:29:20.886988 | orchestrator | ok: [testbed-manager]
2026-03-05 00:29:20.886995 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:29:20.887001 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:29:20.887008 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:29:20.887020 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:30:03.469941 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:30:03.470069 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:30:03.470076 | orchestrator |
2026-03-05 00:30:03.470110 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-05 00:30:03.470116 | orchestrator | Thursday 05 March 2026 00:29:20 +0000 (0:00:01.144) 0:00:25.223 ********
2026-03-05 00:30:03.470120 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:30:03.470126 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:30:03.470130 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:30:03.470134 | orchestrator | changed: [testbed-manager]
2026-03-05 00:30:03.470138 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:30:03.470143 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:30:03.470147 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:30:03.470151 | orchestrator |
2026-03-05 00:30:03.470155 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-05 00:30:03.470159 | orchestrator | Thursday 05 March 2026 00:29:37 +0000 (0:00:16.770) 0:00:41.993 ********
2026-03-05 00:30:03.470162 | orchestrator | ok: [testbed-manager]
2026-03-05 00:30:03.470182 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:30:03.470186 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:30:03.470190 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:30:03.470193 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:30:03.470197 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:30:03.470201 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:30:03.470204 | orchestrator |
2026-03-05 00:30:03.470208 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-05 00:30:03.470212 | orchestrator | Thursday 05 March 2026 00:29:37 +0000 (0:00:00.240) 0:00:42.233 ********
2026-03-05 00:30:03.470216 | orchestrator | ok: [testbed-manager]
2026-03-05 00:30:03.470219 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:30:03.470223 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:30:03.470227 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:30:03.470230 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:30:03.470234 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:30:03.470238 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:30:03.470242 | orchestrator |
2026-03-05 00:30:03.470245 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-05 00:30:03.470249 | orchestrator | Thursday 05 March 2026 00:29:38 +0000 (0:00:00.237) 0:00:42.470 ********
2026-03-05 00:30:03.470253 | orchestrator | ok: [testbed-manager]
2026-03-05 00:30:03.470257 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:30:03.470260 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:30:03.470264 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:30:03.470268 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:30:03.470271 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:30:03.470275 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:30:03.470279 | orchestrator |
2026-03-05 00:30:03.470283 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-05 00:30:03.470287 | orchestrator | Thursday 05 March 2026 00:29:38 +0000 (0:00:00.209) 0:00:42.679 ********
2026-03-05
00:30:03.470292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:30:03.470297 | orchestrator | 2026-03-05 00:30:03.470301 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-05 00:30:03.470305 | orchestrator | Thursday 05 March 2026 00:29:38 +0000 (0:00:00.282) 0:00:42.962 ******** 2026-03-05 00:30:03.470309 | orchestrator | ok: [testbed-manager] 2026-03-05 00:30:03.470312 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:30:03.470316 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:30:03.470320 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:30:03.470324 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:30:03.470327 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:30:03.470331 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:30:03.470335 | orchestrator | 2026-03-05 00:30:03.470338 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-05 00:30:03.470342 | orchestrator | Thursday 05 March 2026 00:29:40 +0000 (0:00:01.737) 0:00:44.700 ******** 2026-03-05 00:30:03.470346 | orchestrator | changed: [testbed-manager] 2026-03-05 00:30:03.470350 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:30:03.470353 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:30:03.470359 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:30:03.470366 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:30:03.470371 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:30:03.470377 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:30:03.470383 | orchestrator | 2026-03-05 00:30:03.470389 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-05 00:30:03.470395 | 
orchestrator | Thursday 05 March 2026 00:29:41 +0000 (0:00:01.069) 0:00:45.770 ******** 2026-03-05 00:30:03.470414 | orchestrator | ok: [testbed-manager] 2026-03-05 00:30:03.470421 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:30:03.470428 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:30:03.470434 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:30:03.470473 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:30:03.470481 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:30:03.470487 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:30:03.470493 | orchestrator | 2026-03-05 00:30:03.470496 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-05 00:30:03.470500 | orchestrator | Thursday 05 March 2026 00:29:42 +0000 (0:00:00.904) 0:00:46.674 ******** 2026-03-05 00:30:03.470505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:30:03.470511 | orchestrator | 2026-03-05 00:30:03.470516 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-05 00:30:03.470523 | orchestrator | Thursday 05 March 2026 00:29:42 +0000 (0:00:00.319) 0:00:46.993 ******** 2026-03-05 00:30:03.470530 | orchestrator | changed: [testbed-manager] 2026-03-05 00:30:03.470537 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:30:03.470544 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:30:03.470552 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:30:03.470559 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:30:03.470565 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:30:03.470571 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:30:03.470578 | orchestrator | 2026-03-05 00:30:03.470594 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-03-05 00:30:03.470598 | orchestrator | Thursday 05 March 2026 00:29:43 +0000 (0:00:01.063) 0:00:48.056 ******** 2026-03-05 00:30:03.470605 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:30:03.470612 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:30:03.470618 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:30:03.470625 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:30:03.470632 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:30:03.470638 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:30:03.470644 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:30:03.470650 | orchestrator | 2026-03-05 00:30:03.470654 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-05 00:30:03.470659 | orchestrator | Thursday 05 March 2026 00:29:43 +0000 (0:00:00.228) 0:00:48.284 ******** 2026-03-05 00:30:03.470666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:30:03.470672 | orchestrator | 2026-03-05 00:30:03.470679 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-05 00:30:03.470686 | orchestrator | Thursday 05 March 2026 00:29:44 +0000 (0:00:00.363) 0:00:48.648 ******** 2026-03-05 00:30:03.470692 | orchestrator | ok: [testbed-manager] 2026-03-05 00:30:03.470699 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:30:03.470705 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:30:03.470712 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:30:03.470719 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:30:03.470725 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:30:03.470732 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:30:03.470738 | 
orchestrator | 2026-03-05 00:30:03.470745 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-05 00:30:03.470749 | orchestrator | Thursday 05 March 2026 00:29:46 +0000 (0:00:01.894) 0:00:50.543 ******** 2026-03-05 00:30:03.470754 | orchestrator | changed: [testbed-manager] 2026-03-05 00:30:03.470758 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:30:03.470763 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:30:03.470767 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:30:03.470773 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:30:03.470779 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:30:03.470788 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:30:03.470795 | orchestrator | 2026-03-05 00:30:03.470807 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-05 00:30:03.470815 | orchestrator | Thursday 05 March 2026 00:29:47 +0000 (0:00:01.151) 0:00:51.694 ******** 2026-03-05 00:30:03.470821 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:30:03.470828 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:30:03.470834 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:30:03.470841 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:30:03.470847 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:30:03.470851 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:30:03.470856 | orchestrator | changed: [testbed-manager] 2026-03-05 00:30:03.470860 | orchestrator | 2026-03-05 00:30:03.470865 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-05 00:30:03.470869 | orchestrator | Thursday 05 March 2026 00:30:00 +0000 (0:00:13.392) 0:01:05.086 ******** 2026-03-05 00:30:03.470874 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:30:03.470879 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:30:03.470883 | orchestrator | ok: 
[testbed-node-4] 2026-03-05 00:30:03.470887 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:30:03.470892 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:30:03.470896 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:30:03.470900 | orchestrator | ok: [testbed-manager] 2026-03-05 00:30:03.470905 | orchestrator | 2026-03-05 00:30:03.470909 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-05 00:30:03.470912 | orchestrator | Thursday 05 March 2026 00:30:01 +0000 (0:00:00.954) 0:01:06.041 ******** 2026-03-05 00:30:03.470917 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:30:03.470921 | orchestrator | ok: [testbed-manager] 2026-03-05 00:30:03.470924 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:30:03.470928 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:30:03.470932 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:30:03.470935 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:30:03.470939 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:30:03.470943 | orchestrator | 2026-03-05 00:30:03.470946 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-05 00:30:03.470951 | orchestrator | Thursday 05 March 2026 00:30:02 +0000 (0:00:00.977) 0:01:07.019 ******** 2026-03-05 00:30:03.470955 | orchestrator | ok: [testbed-manager] 2026-03-05 00:30:03.470962 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:30:03.470966 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:30:03.470970 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:30:03.470974 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:30:03.470977 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:30:03.470981 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:30:03.470985 | orchestrator | 2026-03-05 00:30:03.470989 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-05 00:30:03.470994 | orchestrator | Thursday 
05 March 2026 00:30:02 +0000 (0:00:00.254) 0:01:07.273 ******** 2026-03-05 00:30:03.471000 | orchestrator | ok: [testbed-manager] 2026-03-05 00:30:03.471006 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:30:03.471013 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:30:03.471018 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:30:03.471024 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:30:03.471031 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:30:03.471034 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:30:03.471038 | orchestrator | 2026-03-05 00:30:03.471042 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-05 00:30:03.471046 | orchestrator | Thursday 05 March 2026 00:30:03 +0000 (0:00:00.245) 0:01:07.519 ******** 2026-03-05 00:30:03.471050 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:30:03.471054 | orchestrator | 2026-03-05 00:30:03.471061 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-05 00:32:45.428279 | orchestrator | Thursday 05 March 2026 00:30:03 +0000 (0:00:00.287) 0:01:07.807 ******** 2026-03-05 00:32:45.428456 | orchestrator | ok: [testbed-manager] 2026-03-05 00:32:45.428478 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:32:45.428490 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:32:45.428501 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:32:45.428512 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:32:45.428523 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:32:45.428534 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:32:45.428545 | orchestrator | 2026-03-05 00:32:45.428556 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-03-05 00:32:45.428568 | orchestrator | Thursday 05 March 2026 00:30:05 +0000 (0:00:02.048) 0:01:09.855 ******** 2026-03-05 00:32:45.428579 | orchestrator | changed: [testbed-manager] 2026-03-05 00:32:45.428591 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:32:45.428602 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:32:45.428613 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:32:45.428623 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:32:45.428634 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:32:45.428645 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:32:45.428656 | orchestrator | 2026-03-05 00:32:45.428667 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-05 00:32:45.428678 | orchestrator | Thursday 05 March 2026 00:30:06 +0000 (0:00:00.625) 0:01:10.481 ******** 2026-03-05 00:32:45.428689 | orchestrator | ok: [testbed-manager] 2026-03-05 00:32:45.428700 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:32:45.428711 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:32:45.428722 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:32:45.428732 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:32:45.428743 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:32:45.428754 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:32:45.428765 | orchestrator | 2026-03-05 00:32:45.428778 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-05 00:32:45.428792 | orchestrator | Thursday 05 March 2026 00:30:06 +0000 (0:00:00.230) 0:01:10.712 ******** 2026-03-05 00:32:45.428805 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:32:45.428818 | orchestrator | ok: [testbed-manager] 2026-03-05 00:32:45.428831 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:32:45.428844 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:32:45.428856 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:32:45.428870 | 
orchestrator | ok: [testbed-node-5] 2026-03-05 00:32:45.428882 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:32:45.428895 | orchestrator | 2026-03-05 00:32:45.428908 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-05 00:32:45.428920 | orchestrator | Thursday 05 March 2026 00:30:07 +0000 (0:00:01.037) 0:01:11.749 ******** 2026-03-05 00:32:45.428940 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:32:45.428960 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:32:45.428980 | orchestrator | changed: [testbed-manager] 2026-03-05 00:32:45.428995 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:32:45.429009 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:32:45.429022 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:32:45.429035 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:32:45.429048 | orchestrator | 2026-03-05 00:32:45.429060 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-05 00:32:45.429075 | orchestrator | Thursday 05 March 2026 00:30:08 +0000 (0:00:01.577) 0:01:13.327 ******** 2026-03-05 00:32:45.429086 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:32:45.429097 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:32:45.429108 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:32:45.429119 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:32:45.429130 | orchestrator | ok: [testbed-manager] 2026-03-05 00:32:45.429141 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:32:45.429152 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:32:45.429162 | orchestrator | 2026-03-05 00:32:45.429173 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-05 00:32:45.429210 | orchestrator | Thursday 05 March 2026 00:30:11 +0000 (0:00:02.443) 0:01:15.770 ******** 2026-03-05 00:32:45.429221 | orchestrator | ok: [testbed-manager] 2026-03-05 00:32:45.429232 
| orchestrator | ok: [testbed-node-2] 2026-03-05 00:32:45.429243 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:32:45.429253 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:32:45.429264 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:32:45.429275 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:32:45.429285 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:32:45.429296 | orchestrator | 2026-03-05 00:32:45.429307 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-05 00:32:45.429318 | orchestrator | Thursday 05 March 2026 00:30:59 +0000 (0:00:47.961) 0:02:03.731 ******** 2026-03-05 00:32:45.429329 | orchestrator | changed: [testbed-manager] 2026-03-05 00:32:45.429340 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:32:45.429373 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:32:45.429392 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:32:45.429412 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:32:45.429426 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:32:45.429437 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:32:45.429448 | orchestrator | 2026-03-05 00:32:45.429459 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-05 00:32:45.429475 | orchestrator | Thursday 05 March 2026 00:32:29 +0000 (0:01:30.019) 0:03:33.751 ******** 2026-03-05 00:32:45.429495 | orchestrator | ok: [testbed-manager] 2026-03-05 00:32:45.429509 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:32:45.429520 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:32:45.429531 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:32:45.429541 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:32:45.429552 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:32:45.429563 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:32:45.429574 | orchestrator | 2026-03-05 00:32:45.429584 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-03-05 00:32:45.429595 | orchestrator | Thursday 05 March 2026 00:32:31 +0000 (0:00:01.809) 0:03:35.560 ******** 2026-03-05 00:32:45.429607 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:32:45.429618 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:32:45.429628 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:32:45.429639 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:32:45.429650 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:32:45.429660 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:32:45.429671 | orchestrator | changed: [testbed-manager] 2026-03-05 00:32:45.429682 | orchestrator | 2026-03-05 00:32:45.429693 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-05 00:32:45.429704 | orchestrator | Thursday 05 March 2026 00:32:43 +0000 (0:00:11.972) 0:03:47.532 ******** 2026-03-05 00:32:45.429758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-05 00:32:45.429808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-05 00:32:45.429831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-05 00:32:45.429855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-05 00:32:45.429867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-05 00:32:45.429878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-05 00:32:45.429889 | orchestrator | 2026-03-05 00:32:45.429900 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-05 00:32:45.429911 | orchestrator | Thursday 05 March 2026 00:32:43 +0000 (0:00:00.440) 0:03:47.973 ******** 2026-03-05 00:32:45.429923 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-05 00:32:45.429941 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-05 00:32:45.429959 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:32:45.429971 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-05 00:32:45.429982 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:32:45.429997 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-05 00:32:45.430008 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:32:45.430086 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:32:45.430106 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-05 00:32:45.430118 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-05 00:32:45.430128 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-05 00:32:45.430139 | orchestrator | 2026-03-05 00:32:45.430150 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-05 00:32:45.430160 | orchestrator | Thursday 05 March 2026 00:32:45 +0000 (0:00:01.722) 0:03:49.696 ******** 2026-03-05 00:32:45.430171 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-05 00:32:45.430184 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-05 00:32:45.430194 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-05 00:32:45.430205 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-05 00:32:45.430216 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-05 00:32:45.430237 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-05 00:32:51.301088 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-05 00:32:51.301219 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-05 00:32:51.301246 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-05 00:32:51.301295 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-05 00:32:51.301307 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-05 00:32:51.301318 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-05 00:32:51.301329 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-05 00:32:51.301340 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-05 00:32:51.301391 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-05 00:32:51.301403 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-05 00:32:51.301415 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-05 00:32:51.301427 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-05 00:32:51.301437 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-05 00:32:51.301448 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-05 00:32:51.301459 | orchestrator | 
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-05 00:32:51.301470 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-05 00:32:51.301480 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-05 00:32:51.301491 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-05 00:32:51.301502 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-05 00:32:51.301513 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-05 00:32:51.301524 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-05 00:32:51.301535 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:32:51.301547 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-05 00:32:51.301558 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-05 00:32:51.301568 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-05 00:32:51.301579 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-05 00:32:51.301590 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-05 00:32:51.301600 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-05 00:32:51.301611 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-05 00:32:51.301623 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:32:51.301635 | orchestrator 
| skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-05 00:32:51.301663 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-05 00:32:51.301676 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-05 00:32:51.301689 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-05 00:32:51.301702 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-05 00:32:51.301714 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-05 00:32:51.301737 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:32:51.301749 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:32:51.301762 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-05 00:32:51.301774 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-05 00:32:51.301787 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-05 00:32:51.301799 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-05 00:32:51.301812 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-05 00:32:51.301842 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-05 00:32:51.301856 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-05 00:32:51.301869 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-05 00:32:51.301881 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-05 00:32:51.301893 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-05 00:32:51.301906 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-05 00:32:51.301919 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-05 00:32:51.301931 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-05 00:32:51.301943 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-05 00:32:51.301954 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-05 00:32:51.301968 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-05 00:32:51.301980 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-05 00:32:51.301992 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-05 00:32:51.302003 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-05 00:32:51.302013 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-05 00:32:51.302095 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-05 00:32:51.302106 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-05 00:32:51.302121 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-05 00:32:51.302140 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-05 00:32:51.302160 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-05 00:32:51.302178 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-05 00:32:51.302198 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-05 00:32:51.302215 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-05 00:32:51.302235 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-05 00:32:51.302254 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-05 00:32:51.302274 | orchestrator |
2026-03-05 00:32:51.302286 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-05 00:32:51.302306 | orchestrator | Thursday 05 March 2026 00:32:49 +0000 (0:00:03.851) 0:03:53.547 ********
2026-03-05 00:32:51.302317 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-05 00:32:51.302329 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-05 00:32:51.302339 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-05 00:32:51.302409 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-05 00:32:51.302422 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-05 00:32:51.302440 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-05 00:32:51.302451 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-05 00:32:51.302462 | orchestrator |
2026-03-05 00:32:51.302473 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-05 00:32:51.302483 | orchestrator | Thursday 05 March 2026 00:32:50 +0000 (0:00:01.601) 0:03:55.148 ********
2026-03-05 00:32:51.302494 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:32:51.302504 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:32:51.302515 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:32:51.302526 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:32:51.302537 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:32:51.302547 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:32:51.302558 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:32:51.302569 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:32:51.302580 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:32:51.302590 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:32:51.302611 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:33:04.816818 | orchestrator |
2026-03-05 00:33:04.816920 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-05 00:33:04.816932 | orchestrator | Thursday 05 March 2026 00:32:51 +0000 (0:00:00.489) 0:03:55.638 ********
2026-03-05 00:33:04.816939 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:33:04.816948 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:33:04.816956 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:33:04.816964 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:33:04.816971 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:33:04.816977 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:33:04.816984 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:33:04.816991 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:33:04.816999 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:33:04.817005 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:33:04.817012 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-05 00:33:04.817019 | orchestrator |
2026-03-05 00:33:04.817026 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-05 00:33:04.817054 | orchestrator | Thursday 05 March 2026 00:32:51 +0000 (0:00:00.582) 0:03:56.220 ********
2026-03-05 00:33:04.817062 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:33:04.817068 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:33:04.817075 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:33:04.817081 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:33:04.817087 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:33:04.817094 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:33:04.817100 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:33:04.817107 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:33:04.817113 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:33:04.817120 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:33:04.817127 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-05 00:33:04.817133 | orchestrator |
2026-03-05 00:33:04.817141 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-05 00:33:04.817147 | orchestrator | Thursday 05 March 2026 00:32:52 +0000 (0:00:00.330) 0:03:56.809 ********
2026-03-05 00:33:04.817154 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:33:04.817161 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:33:04.817167 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:33:04.817174 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:33:04.817181 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:33:04.817187 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:33:04.817194 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:33:04.817201 | orchestrator |
2026-03-05 00:33:04.817209 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-05 00:33:04.817217 | orchestrator | Thursday 05 March 2026 00:32:52 +0000 (0:00:00.330) 0:03:57.140 ********
2026-03-05 00:33:04.817226 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:33:04.817234 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:33:04.817242 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:33:04.817249 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:33:04.817257 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:33:04.817265 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:04.817273 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:33:04.817281 | orchestrator |
2026-03-05 00:33:04.817288 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-05 00:33:04.817296 | orchestrator | Thursday 05 March 2026 00:32:58 +0000 (0:00:05.458) 0:04:02.599 ********
2026-03-05 00:33:04.817303 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-05 00:33:04.817311 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-05 00:33:04.817317 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:33:04.817323 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:33:04.817329 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-05 00:33:04.817334 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-05 00:33:04.817362 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:33:04.817369 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-05 00:33:04.817375 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:33:04.817400 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-05 00:33:04.817406 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:33:04.817413 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:33:04.817419 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-05 00:33:04.817425 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:33:04.817431 | orchestrator |
2026-03-05 00:33:04.817437 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-05 00:33:04.817450 | orchestrator | Thursday 05 March 2026 00:32:58 +0000 (0:00:00.313) 0:04:02.912 ********
2026-03-05 00:33:04.817457 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-05 00:33:04.817463 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-05 00:33:04.817469 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-05 00:33:04.817494 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-05 00:33:04.817500 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-05 00:33:04.817506 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-05 00:33:04.817512 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-05 00:33:04.817518 | orchestrator |
2026-03-05 00:33:04.817524 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-05 00:33:04.817530 | orchestrator | Thursday 05 March 2026 00:32:59 +0000 (0:00:01.284) 0:04:04.197 ********
2026-03-05 00:33:04.817586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:33:04.817596 | orchestrator |
2026-03-05 00:33:04.817602 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-05 00:33:04.817608 | orchestrator | Thursday 05 March 2026 00:33:00 +0000 (0:00:00.576) 0:04:04.773 ********
2026-03-05 00:33:04.817614 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:04.817620 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:33:04.817626 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:33:04.817632 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:33:04.817638 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:33:04.817644 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:33:04.817650 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:33:04.817655 | orchestrator |
2026-03-05 00:33:04.817661 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-05 00:33:04.817667 | orchestrator | Thursday 05 March 2026 00:33:01 +0000 (0:00:01.342) 0:04:06.115 ********
2026-03-05 00:33:04.817673 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:04.817679 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:33:04.817685 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:33:04.817690 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:33:04.817696 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:33:04.817701 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:33:04.817707 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:33:04.817713 | orchestrator |
2026-03-05 00:33:04.817720 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-05 00:33:04.817725 | orchestrator | Thursday 05 March 2026 00:33:02 +0000 (0:00:00.704) 0:04:06.819 ********
2026-03-05 00:33:04.817732 | orchestrator | changed: [testbed-manager]
2026-03-05 00:33:04.817738 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:33:04.817745 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:33:04.817751 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:33:04.817757 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:33:04.817763 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:33:04.817769 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:33:04.817774 | orchestrator |
2026-03-05 00:33:04.817780 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-05 00:33:04.817786 | orchestrator | Thursday 05 March 2026 00:33:03 +0000 (0:00:00.618) 0:04:07.438 ********
2026-03-05 00:33:04.817792 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:04.817798 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:33:04.817804 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:33:04.817810 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:33:04.817815 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:33:04.817821 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:33:04.817827 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:33:04.817833 | orchestrator |
2026-03-05 00:33:04.817839 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-05 00:33:04.817855 | orchestrator | Thursday 05 March 2026 00:33:03 +0000 (0:00:00.640) 0:04:08.079 ********
2026-03-05 00:33:04.817891 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669085.7619066, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:04.817901 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669123.0602882, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:04.817908 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669113.8333871, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:04.817937 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669103.1403804, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:09.760629 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669096.843784, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:09.760713 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669107.9886682, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:09.760724 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772669102.751447, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:09.760752 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:09.760771 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:09.760779 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:09.760785 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:09.760812 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:09.760820 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:09.760827 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 00:33:09.760839 | orchestrator |
2026-03-05 00:33:09.760848 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-05 00:33:09.760856 | orchestrator | Thursday 05 March 2026 00:33:04 +0000 (0:00:01.070) 0:04:09.150 ********
2026-03-05 00:33:09.760863 | orchestrator | changed: [testbed-manager]
2026-03-05 00:33:09.760872 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:33:09.760878 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:33:09.760885 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:33:09.760892 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:33:09.760899 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:33:09.760906 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:33:09.760913 | orchestrator |
2026-03-05 00:33:09.760919 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-05 00:33:09.760925 | orchestrator | Thursday 05 March 2026 00:33:05 +0000 (0:00:01.125) 0:04:10.276 ********
2026-03-05 00:33:09.760932 | orchestrator | changed: [testbed-manager]
2026-03-05 00:33:09.760938 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:33:09.760944 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:33:09.760950 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:33:09.760957 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:33:09.760963 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:33:09.760969 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:33:09.760975 | orchestrator |
2026-03-05 00:33:09.760986 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-05 00:33:09.760992 | orchestrator | Thursday 05 March 2026 00:33:07 +0000 (0:00:01.190) 0:04:11.466 ********
2026-03-05 00:33:09.760999 | orchestrator | changed: [testbed-manager]
2026-03-05 00:33:09.761005 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:33:09.761011 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:33:09.761017 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:33:09.761024 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:33:09.761030 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:33:09.761036 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:33:09.761042 | orchestrator |
2026-03-05 00:33:09.761048 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-05 00:33:09.761055 | orchestrator | Thursday 05 March 2026 00:33:08 +0000 (0:00:01.200) 0:04:12.667 ********
2026-03-05 00:33:09.761061 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:33:09.761067 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:33:09.761073 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:33:09.761080 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:33:09.761086 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:33:09.761092 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:33:09.761098 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:33:09.761104 | orchestrator |
2026-03-05 00:33:09.761111 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-05 00:33:09.761117 | orchestrator | Thursday 05 March 2026 00:33:08 +0000 (0:00:00.286) 0:04:12.953 ********
2026-03-05 00:33:09.761123 | orchestrator | ok: [testbed-manager]
2026-03-05 00:33:09.761131 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:33:09.761137 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:33:09.761143 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:33:09.761149 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:33:09.761156 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:33:09.761162 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:33:09.761168 | orchestrator |
2026-03-05 00:33:09.761174 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-05 00:33:09.761181 | orchestrator | Thursday 05 March 2026 00:33:09 +0000 (0:00:00.756) 0:04:13.710 ********
2026-03-05 00:33:09.761190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:33:09.761203 | orchestrator |
2026-03-05 00:33:09.761211 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-05 00:33:09.761223 | orchestrator | Thursday 05 March 2026 00:33:09 +0000 (0:00:00.390) 0:04:14.100 ********
2026-03-05 00:34:27.706993 | orchestrator | ok: [testbed-manager]
2026-03-05 00:34:27.707102 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:34:27.707118 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:34:27.707130 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:34:27.707141 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:34:27.707152 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:34:27.707163 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:34:27.707174 | orchestrator |
2026-03-05 00:34:27.707186 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-05 00:34:27.707198 | orchestrator | Thursday 05 March 2026 00:33:18 +0000 (0:00:08.420) 0:04:22.521 ********
2026-03-05 00:34:27.707209 | orchestrator | ok: [testbed-manager]
2026-03-05 00:34:27.707220 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:34:27.707230 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:34:27.707241 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:34:27.707251 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:34:27.707262 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:34:27.707332 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:34:27.707344 | orchestrator |
2026-03-05 00:34:27.707355 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-05 00:34:27.707366 | orchestrator | Thursday 05 March 2026 00:33:19 +0000 (0:00:01.244) 0:04:23.765 ********
2026-03-05 00:34:27.707376 | orchestrator | ok: [testbed-manager]
2026-03-05 00:34:27.707387 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:34:27.707398 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:34:27.707409 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:34:27.707420 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:34:27.707431 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:34:27.707442 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:34:27.707452 | orchestrator |
2026-03-05 00:34:27.707463 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-05 00:34:27.707474 | orchestrator | Thursday 05 March 2026 00:33:20 +0000 (0:00:01.166) 0:04:24.932 ********
2026-03-05 00:34:27.707485 | orchestrator | ok: [testbed-manager]
2026-03-05 00:34:27.707496 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:34:27.707506 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:34:27.707517 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:34:27.707528 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:34:27.707541 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:34:27.707555 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:34:27.707568 | orchestrator |
2026-03-05 00:34:27.707581 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-05 00:34:27.707594 | orchestrator | Thursday 05 March 2026 00:33:20 +0000 (0:00:00.287) 0:04:25.220 ********
2026-03-05 00:34:27.707606 | orchestrator | ok: [testbed-manager]
2026-03-05 00:34:27.707619 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:34:27.707631 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:34:27.707643 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:34:27.707656 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:34:27.707668 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:34:27.707681 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:34:27.707693 | orchestrator |
2026-03-05 00:34:27.707706 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-05 00:34:27.707719 | orchestrator | Thursday 05 March 2026 00:33:21 +0000 (0:00:00.299) 0:04:25.520 ********
2026-03-05 00:34:27.707732 | orchestrator | ok: [testbed-manager]
2026-03-05 00:34:27.707744 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:34:27.707757 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:34:27.707769 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:34:27.707805 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:34:27.707818 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:34:27.707830 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:34:27.707842 | orchestrator |
2026-03-05 00:34:27.707854 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-05 00:34:27.707867 | orchestrator | Thursday 05 March 2026 00:33:21 +0000 (0:00:00.265) 0:04:25.785 ********
2026-03-05 00:34:27.707880 | orchestrator | ok: [testbed-manager]
2026-03-05 00:34:27.707891 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:34:27.707901 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:34:27.707912 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:34:27.707922 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:34:27.707933 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:34:27.707943 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:34:27.707954 | orchestrator |
2026-03-05 00:34:27.707964 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-05 00:34:27.707975 | orchestrator | Thursday 05 March 2026 00:33:26 +0000 (0:00:05.486) 0:04:31.271 ********
2026-03-05 00:34:27.707987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:34:27.708000 | orchestrator |
2026-03-05 00:34:27.708011 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-05 00:34:27.708022 | orchestrator | Thursday 05 March 2026 00:33:27 +0000 (0:00:00.457) 0:04:31.728 ********
2026-03-05 00:34:27.708033 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-05 00:34:27.708043 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-05 00:34:27.708055 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:34:27.708066 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-05 00:34:27.708076 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-05 00:34:27.708112 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-05 00:34:27.708131 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:34:27.708149 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-05 00:34:27.708168 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-05 00:34:27.708179 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-05 00:34:27.708190 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:34:27.708201 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:34:27.708211 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-05 00:34:27.708222 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-05 00:34:27.708233 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-05 00:34:27.708243 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-05 00:34:27.708289 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:34:27.708302 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:34:27.708312 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-05 00:34:27.708323 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-05 00:34:27.708334 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:34:27.708345 | orchestrator |
2026-03-05 00:34:27.708356 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-05 00:34:27.708366 | orchestrator | Thursday 05 March 2026 00:33:27 +0000 (0:00:00.375) 0:04:32.103 ********
2026-03-05 00:34:27.708377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:34:27.708388 | orchestrator |
2026-03-05 00:34:27.708399 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-05 00:34:27.708410 | orchestrator | Thursday 05 March 2026 00:33:28 +0000 (0:00:00.445) 0:04:32.549 ********
2026-03-05 00:34:27.708430 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-05 00:34:27.708441 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-05 00:34:27.708452 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:34:27.708463 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-05 00:34:27.708473 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:34:27.708484 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:34:27.708495 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-05 00:34:27.708505 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-05 00:34:27.708516 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:34:27.708526 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-05 00:34:27.708537 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:34:27.708548 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:34:27.708558 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-05 00:34:27.708569 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:34:27.708580 | orchestrator |
2026-03-05 00:34:27.708590 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-05 00:34:27.708601 | orchestrator | Thursday 05 March 2026 00:33:28 +0000 (0:00:00.314) 0:04:32.863 ********
2026-03-05 00:34:27.708612 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:34:27.708623 | orchestrator |
2026-03-05 00:34:27.708634 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-05 00:34:27.708644 | orchestrator | Thursday 05 March 2026 00:33:28 +0000 (0:00:00.433) 0:04:33.297 ********
2026-03-05 00:34:27.708655 | orchestrator | changed: [testbed-manager]
2026-03-05
00:34:27.708666 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:34:27.708677 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:34:27.708688 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:34:27.708698 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:34:27.708714 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:34:27.708726 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:34:27.708736 | orchestrator | 2026-03-05 00:34:27.708747 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-05 00:34:27.708758 | orchestrator | Thursday 05 March 2026 00:34:03 +0000 (0:00:34.535) 0:05:07.833 ******** 2026-03-05 00:34:27.708768 | orchestrator | changed: [testbed-manager] 2026-03-05 00:34:27.708793 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:34:27.708805 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:34:27.708826 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:34:27.708837 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:34:27.708847 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:34:27.708858 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:34:27.708868 | orchestrator | 2026-03-05 00:34:27.708879 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-05 00:34:27.708890 | orchestrator | Thursday 05 March 2026 00:34:12 +0000 (0:00:08.845) 0:05:16.678 ******** 2026-03-05 00:34:27.708901 | orchestrator | changed: [testbed-manager] 2026-03-05 00:34:27.708911 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:34:27.708922 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:34:27.708932 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:34:27.708943 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:34:27.708953 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:34:27.708964 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:34:27.708974 | 
orchestrator | 2026-03-05 00:34:27.708985 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-05 00:34:27.708996 | orchestrator | Thursday 05 March 2026 00:34:20 +0000 (0:00:07.837) 0:05:24.516 ******** 2026-03-05 00:34:27.709013 | orchestrator | ok: [testbed-manager] 2026-03-05 00:34:27.709024 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:34:27.709034 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:34:27.709045 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:34:27.709069 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:34:27.709080 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:34:27.709100 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:34:27.709111 | orchestrator | 2026-03-05 00:34:27.709122 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-05 00:34:27.709133 | orchestrator | Thursday 05 March 2026 00:34:21 +0000 (0:00:01.773) 0:05:26.289 ******** 2026-03-05 00:34:27.709144 | orchestrator | changed: [testbed-manager] 2026-03-05 00:34:27.709155 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:34:27.709165 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:34:27.709176 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:34:27.709187 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:34:27.709198 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:34:27.709208 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:34:27.709219 | orchestrator | 2026-03-05 00:34:27.709237 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-05 00:34:38.900637 | orchestrator | Thursday 05 March 2026 00:34:27 +0000 (0:00:05.747) 0:05:32.037 ******** 2026-03-05 00:34:38.900783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:34:38.900815 | orchestrator | 2026-03-05 00:34:38.900837 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-05 00:34:38.900858 | orchestrator | Thursday 05 March 2026 00:34:28 +0000 (0:00:00.600) 0:05:32.637 ******** 2026-03-05 00:34:38.900877 | orchestrator | changed: [testbed-manager] 2026-03-05 00:34:38.900898 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:34:38.900916 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:34:38.900935 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:34:38.900954 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:34:38.900971 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:34:38.900988 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:34:38.901006 | orchestrator | 2026-03-05 00:34:38.901025 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-05 00:34:38.901043 | orchestrator | Thursday 05 March 2026 00:34:29 +0000 (0:00:00.791) 0:05:33.428 ******** 2026-03-05 00:34:38.901062 | orchestrator | ok: [testbed-manager] 2026-03-05 00:34:38.901083 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:34:38.901102 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:34:38.901120 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:34:38.901137 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:34:38.901155 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:34:38.901174 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:34:38.901192 | orchestrator | 2026-03-05 00:34:38.901212 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-05 00:34:38.901234 | orchestrator | Thursday 05 March 2026 00:34:30 +0000 (0:00:01.670) 0:05:35.099 ******** 2026-03-05 00:34:38.901254 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:34:38.901302 | orchestrator | changed: [testbed-node-5] 
2026-03-05 00:34:38.901323 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:34:38.901344 | orchestrator | changed: [testbed-manager] 2026-03-05 00:34:38.901364 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:34:38.901383 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:34:38.901403 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:34:38.901423 | orchestrator | 2026-03-05 00:34:38.901444 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-05 00:34:38.901463 | orchestrator | Thursday 05 March 2026 00:34:31 +0000 (0:00:00.785) 0:05:35.884 ******** 2026-03-05 00:34:38.901483 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:34:38.901533 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:34:38.901553 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:34:38.901571 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:34:38.901591 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:34:38.901609 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:34:38.901626 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:34:38.901645 | orchestrator | 2026-03-05 00:34:38.901664 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-05 00:34:38.901683 | orchestrator | Thursday 05 March 2026 00:34:31 +0000 (0:00:00.289) 0:05:36.174 ******** 2026-03-05 00:34:38.901701 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:34:38.901719 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:34:38.901736 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:34:38.901753 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:34:38.901790 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:34:38.901809 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:34:38.901828 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:34:38.901848 | orchestrator | 2026-03-05 00:34:38.901866 | 
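For readers reproducing the timezone step outside the testbed, the tasks above map onto standard Ansible modules. A minimal sketch, assuming the community.general collection is available; this is not the actual osism.commons.timezone role source:

```yaml
# Hypothetical equivalent of the timezone tasks logged above.
- name: Install tzdata package
  ansible.builtin.apt:
    name: tzdata
    state: present

- name: Set timezone to UTC
  community.general.timezone:
    name: UTC
```

The /etc/adjtime tasks report "skipping" on every host, consistent with virtual machines whose hardware clock is already kept in UTC, so no adjtime correction is needed.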
TASK [osism.services.docker : Gather variables for each operating system] ******
Thursday 05 March 2026 00:34:32 +0000 (0:00:00.384)       0:05:36.559 ********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Set docker_version variable to default value] ****
Thursday 05 March 2026 00:34:32 +0000 (0:00:00.243)       0:05:36.803 ********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
Thursday 05 March 2026 00:34:32 +0000 (0:00:00.257)       0:05:37.061 ********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Print used docker version] ***********************
Thursday 05 March 2026 00:34:33 +0000 (0:00:00.311)       0:05:37.372 ********
ok: [testbed-manager] =>
  docker_version: 5:27.5.1
ok: [testbed-node-3] =>
  docker_version: 5:27.5.1
ok: [testbed-node-4] =>
  docker_version: 5:27.5.1
ok: [testbed-node-5] =>
  docker_version: 5:27.5.1
ok: [testbed-node-0] =>
  docker_version: 5:27.5.1
ok: [testbed-node-1] =>
  docker_version: 5:27.5.1
ok: [testbed-node-2] =>
  docker_version: 5:27.5.1

TASK [osism.services.docker : Print used docker cli version] *******************
Thursday 05 March 2026 00:34:33 +0000 (0:00:00.281)       0:05:37.654 ********
ok: [testbed-manager] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-3] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-4] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-5] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-0] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-1] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-2] =>
  docker_cli_version: 5:27.5.1

TASK [osism.services.docker : Include block storage tasks] *********************
Thursday 05 March 2026 00:34:33 +0000 (0:00:00.304)       0:05:37.958 ********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Include zram storage tasks] **********************
Thursday 05 March 2026 00:34:33 +0000 (0:00:00.263)       0:05:38.221 ********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Include docker install tasks] ********************
Thursday 05 March 2026 00:34:34 +0000 (0:00:00.250)       0:05:38.472 ********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Remove old architecture-dependent repository] ****
Thursday 05 March 2026 00:34:34 +0000 (0:00:00.392)       0:05:38.865 ********
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-3]

TASK [osism.services.docker : Gather package facts] ****************************
Thursday 05 March 2026 00:34:35 +0000 (0:00:00.968)       0:05:39.834 ********
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-0]

TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
Thursday 05 March 2026 00:34:38 +0000 (0:00:03.034)       0:05:42.868 ********
skipping: [testbed-manager] => (item=containerd)
skipping: [testbed-manager] => (item=docker.io)
skipping: [testbed-manager] => (item=docker-engine)
skipping: [testbed-node-3] => (item=containerd)
skipping: [testbed-node-3] => (item=docker.io)
skipping: [testbed-node-3] => (item=docker-engine)
skipping: [testbed-manager]
skipping: [testbed-node-4] => (item=containerd)
skipping: [testbed-node-4] => (item=docker.io)
skipping: [testbed-node-4] => (item=docker-engine)
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=containerd)
skipping: [testbed-node-5] => (item=docker.io)
skipping: [testbed-node-5] => (item=docker-engine)
skipping: [testbed-node-4]
skipping: [testbed-node-0] => (item=containerd)
skipping: [testbed-node-0] => (item=docker.io)
skipping: [testbed-node-0] => (item=docker-engine)
skipping: [testbed-node-5]
skipping: [testbed-node-1] => (item=containerd)
skipping: [testbed-node-1] => (item=docker.io)
skipping: [testbed-node-1] => (item=docker-engine)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=containerd)
skipping: [testbed-node-2] => (item=docker.io)
skipping: [testbed-node-2] => (item=docker-engine)
skipping: [testbed-node-2]

TASK [osism.services.docker : Install apt-transport-https package] *************
Thursday 05 March 2026 00:34:39 +0000 (0:00:00.565)       0:05:43.434 ********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-2]

TASK [osism.services.docker : Add repository gpg key] **************************
Thursday 05 March 2026 00:34:45 +0000 (0:00:06.804)       0:05:50.239 ********
changed: [testbed-node-3]
ok: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [osism.services.docker : Add repository] **********************************
Thursday 05 March 2026 00:34:46 +0000 (0:00:01.054)       0:05:51.294 ********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-0]

TASK [osism.services.docker : Update package cache] ****************************
Thursday 05 March 2026 00:34:55 +0000 (0:00:08.548)       0:05:59.842 ********
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [osism.services.docker : Pin docker package version] **********************
Thursday 05 March 2026 00:34:58 +0000 (0:00:03.259)       0:06:03.102 ********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Pin docker-cli package version] ******************
Thursday 05 March 2026 00:35:00 +0000 (0:00:01.252)       0:06:04.355 ********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Unlock containerd package] ***********************
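The "Pin docker package version" tasks hold the packages at the docker_version printed earlier (5:27.5.1), so a later `apt upgrade` cannot move them. A hedged sketch of how such a pin is commonly expressed via apt preferences; the file name, pin pattern, and priority are assumptions, not taken from the osism.services.docker role:

```yaml
# Illustrative only; the real role's pinning mechanism may differ.
- name: Pin docker package version
  ansible.builtin.copy:
    dest: /etc/apt/preferences.d/docker-ce
    mode: "0644"
    content: |
      Package: docker-ce
      Pin: version 5:27.5.1*
      Pin-Priority: 1000
```

This explains the ok/changed split in the log: testbed-manager already carries the pin from its initial provisioning, while the nodes write it for the first time.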
Thursday 05 March 2026 00:35:01 +0000 (0:00:01.484)       0:06:05.839 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.services.docker : Install containerd package] **********************
Thursday 05 March 2026 00:35:02 +0000 (0:00:00.617)       0:06:06.457 ********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-5]

TASK [osism.services.docker : Lock containerd package] *************************
Thursday 05 March 2026 00:35:11 +0000 (0:00:09.757)       0:06:16.214 ********
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Install docker-cli package] **********************
Thursday 05 March 2026 00:35:12 +0000 (0:00:00.906)       0:06:17.121 ********
ok: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-3]

TASK [osism.services.docker : Install docker package] **************************
Thursday 05 March 2026 00:35:22 +0000 (0:00:09.324)       0:06:26.445 ********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-5]

TASK [osism.services.docker : Unblock installation of python docker packages] ***
Thursday 05 March 2026 00:35:33 +0000 (0:00:11.047)       0:06:37.492 ********
ok: [testbed-manager] => (item=python3-docker)
ok: [testbed-node-3] => (item=python3-docker)
ok: [testbed-node-4] => (item=python3-docker)
ok: [testbed-node-5] => (item=python3-docker)
ok: [testbed-node-0] => (item=python3-docker)
ok: [testbed-manager] => (item=python-docker)
ok: [testbed-node-1] => (item=python3-docker)
ok: [testbed-node-3] => (item=python-docker)
ok: [testbed-node-2] => (item=python3-docker)
ok: [testbed-node-4] => (item=python-docker)
ok: [testbed-node-5] => (item=python-docker)
ok: [testbed-node-0] => (item=python-docker)
ok: [testbed-node-1] => (item=python-docker)
ok: [testbed-node-2] => (item=python-docker)

TASK [osism.services.docker : Install python3 docker package] ******************
Thursday 05 March 2026 00:35:34 +0000 (0:00:01.193)       0:06:38.686 ********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
Thursday 05 March 2026 00:35:34 +0000 (0:00:00.494)       0:06:39.180 ********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-2]

TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
Thursday 05 March 2026 00:35:38 +0000 (0:00:03.694)       0:06:42.875 ********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
Thursday 05 March 2026 00:35:38 +0000 (0:00:00.471)       0:06:43.347 ********
skipping: [testbed-manager] =>
(item=python3-docker)  2026-03-05 00:35:39.426863 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-05 00:35:39.426874 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:35:39.426885 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-05 00:35:39.426895 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-05 00:35:39.426906 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:35:39.426917 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-05 00:35:39.426928 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-05 00:35:39.426938 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-05 00:35:39.426962 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-05 00:35:58.061344 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:35:58.061462 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-05 00:35:58.061477 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-05 00:35:58.061490 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:35:58.061510 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-05 00:35:58.061562 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-05 00:35:58.061583 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:35:58.061603 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:35:58.061636 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-05 00:35:58.061658 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-05 00:35:58.061675 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:35:58.061695 | orchestrator | 2026-03-05 00:35:58.061717 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-05 00:35:58.061740 | 
orchestrator | Thursday 05 March 2026 00:35:39 +0000 (0:00:00.653) 0:06:44.000 ******** 2026-03-05 00:35:58.061759 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:35:58.061776 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:35:58.061787 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:35:58.061798 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:35:58.061809 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:35:58.061819 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:35:58.061830 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:35:58.061846 | orchestrator | 2026-03-05 00:35:58.061865 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-05 00:35:58.061884 | orchestrator | Thursday 05 March 2026 00:35:40 +0000 (0:00:00.491) 0:06:44.492 ******** 2026-03-05 00:35:58.061904 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:35:58.061918 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:35:58.061930 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:35:58.061943 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:35:58.061955 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:35:58.061968 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:35:58.061980 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:35:58.061993 | orchestrator | 2026-03-05 00:35:58.062005 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-03-05 00:35:58.062093 | orchestrator | Thursday 05 March 2026 00:35:40 +0000 (0:00:00.490) 0:06:44.983 ******** 2026-03-05 00:35:58.062106 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:35:58.062119 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:35:58.062164 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:35:58.062185 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:35:58.062203 | orchestrator | 
skipping: [testbed-node-0] 2026-03-05 00:35:58.062222 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:35:58.062240 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:35:58.062259 | orchestrator | 2026-03-05 00:35:58.062277 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-05 00:35:58.062296 | orchestrator | Thursday 05 March 2026 00:35:41 +0000 (0:00:00.492) 0:06:45.475 ******** 2026-03-05 00:35:58.062316 | orchestrator | ok: [testbed-manager] 2026-03-05 00:35:58.062334 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:35:58.062353 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:35:58.062372 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:35:58.062390 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:35:58.062409 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:35:58.062420 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:35:58.062430 | orchestrator | 2026-03-05 00:35:58.062442 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-05 00:35:58.062461 | orchestrator | Thursday 05 March 2026 00:35:42 +0000 (0:00:01.846) 0:06:47.322 ******** 2026-03-05 00:35:58.062482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:35:58.062503 | orchestrator | 2026-03-05 00:35:58.062518 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-05 00:35:58.062529 | orchestrator | Thursday 05 March 2026 00:35:43 +0000 (0:00:00.797) 0:06:48.119 ******** 2026-03-05 00:35:58.062562 | orchestrator | ok: [testbed-manager] 2026-03-05 00:35:58.062574 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:35:58.062591 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:35:58.062609 | orchestrator | 
changed: [testbed-node-5] 2026-03-05 00:35:58.062628 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:35:58.062647 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:35:58.062665 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:35:58.062678 | orchestrator | 2026-03-05 00:35:58.062689 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-05 00:35:58.062701 | orchestrator | Thursday 05 March 2026 00:35:44 +0000 (0:00:00.811) 0:06:48.930 ******** 2026-03-05 00:35:58.062711 | orchestrator | ok: [testbed-manager] 2026-03-05 00:35:58.062726 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:35:58.062744 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:35:58.062763 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:35:58.062781 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:35:58.062796 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:35:58.062807 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:35:58.062817 | orchestrator | 2026-03-05 00:35:58.062828 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-05 00:35:58.062839 | orchestrator | Thursday 05 March 2026 00:35:45 +0000 (0:00:00.934) 0:06:49.865 ******** 2026-03-05 00:35:58.062850 | orchestrator | ok: [testbed-manager] 2026-03-05 00:35:58.062860 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:35:58.062871 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:35:58.062881 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:35:58.062892 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:35:58.062902 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:35:58.062913 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:35:58.062923 | orchestrator | 2026-03-05 00:35:58.062940 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-03-05 00:35:58.062984 | 
orchestrator | Thursday 05 March 2026 00:35:47 +0000 (0:00:01.498) 0:06:51.364 ******** 2026-03-05 00:35:58.063000 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:35:58.063011 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:35:58.063028 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:35:58.063046 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:35:58.063065 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:35:58.063084 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:35:58.063102 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:35:58.063118 | orchestrator | 2026-03-05 00:35:58.063157 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-05 00:35:58.063169 | orchestrator | Thursday 05 March 2026 00:35:48 +0000 (0:00:01.371) 0:06:52.735 ******** 2026-03-05 00:35:58.063180 | orchestrator | ok: [testbed-manager] 2026-03-05 00:35:58.063190 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:35:58.063201 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:35:58.063211 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:35:58.063222 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:35:58.063238 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:35:58.063257 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:35:58.063274 | orchestrator | 2026-03-05 00:35:58.063289 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-05 00:35:58.063304 | orchestrator | Thursday 05 March 2026 00:35:49 +0000 (0:00:01.344) 0:06:54.079 ******** 2026-03-05 00:35:58.063320 | orchestrator | changed: [testbed-manager] 2026-03-05 00:35:58.063339 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:35:58.063357 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:35:58.063377 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:35:58.063395 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:35:58.063413 | 
orchestrator | changed: [testbed-node-1] 2026-03-05 00:35:58.063432 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:35:58.063449 | orchestrator | 2026-03-05 00:35:58.063468 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-05 00:35:58.063498 | orchestrator | Thursday 05 March 2026 00:35:51 +0000 (0:00:01.404) 0:06:55.484 ******** 2026-03-05 00:35:58.063516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:35:58.063528 | orchestrator | 2026-03-05 00:35:58.063539 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-05 00:35:58.063550 | orchestrator | Thursday 05 March 2026 00:35:52 +0000 (0:00:00.939) 0:06:56.423 ******** 2026-03-05 00:35:58.063561 | orchestrator | ok: [testbed-manager] 2026-03-05 00:35:58.063572 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:35:58.063591 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:35:58.063610 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:35:58.063630 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:35:58.063648 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:35:58.063667 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:35:58.063680 | orchestrator | 2026-03-05 00:35:58.063692 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-05 00:35:58.063704 | orchestrator | Thursday 05 March 2026 00:35:53 +0000 (0:00:01.366) 0:06:57.789 ******** 2026-03-05 00:35:58.063722 | orchestrator | ok: [testbed-manager] 2026-03-05 00:35:58.063741 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:35:58.063759 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:35:58.063775 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:35:58.063789 | orchestrator | 
ok: [testbed-node-0] 2026-03-05 00:35:58.063807 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:35:58.063845 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:35:58.063865 | orchestrator | 2026-03-05 00:35:58.063883 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-05 00:35:58.063903 | orchestrator | Thursday 05 March 2026 00:35:54 +0000 (0:00:01.127) 0:06:58.917 ******** 2026-03-05 00:35:58.063921 | orchestrator | ok: [testbed-manager] 2026-03-05 00:35:58.063939 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:35:58.063958 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:35:58.063977 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:35:58.063988 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:35:58.063999 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:35:58.064009 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:35:58.064020 | orchestrator | 2026-03-05 00:35:58.064031 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-05 00:35:58.064042 | orchestrator | Thursday 05 March 2026 00:35:55 +0000 (0:00:01.130) 0:07:00.047 ******** 2026-03-05 00:35:58.064052 | orchestrator | ok: [testbed-manager] 2026-03-05 00:35:58.064063 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:35:58.064073 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:35:58.064084 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:35:58.064094 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:35:58.064108 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:35:58.064276 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:35:58.064320 | orchestrator | 2026-03-05 00:35:58.064331 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-05 00:35:58.064341 | orchestrator | Thursday 05 March 2026 00:35:56 +0000 (0:00:01.264) 0:07:01.311 ******** 2026-03-05 00:35:58.064351 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:35:58.064361 | orchestrator | 2026-03-05 00:35:58.064370 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-05 00:35:58.064380 | orchestrator | Thursday 05 March 2026 00:35:57 +0000 (0:00:00.802) 0:07:02.114 ******** 2026-03-05 00:35:58.064389 | orchestrator | 2026-03-05 00:35:58.064399 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-05 00:35:58.064408 | orchestrator | Thursday 05 March 2026 00:35:57 +0000 (0:00:00.037) 0:07:02.151 ******** 2026-03-05 00:35:58.064429 | orchestrator | 2026-03-05 00:35:58.064439 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-05 00:35:58.064449 | orchestrator | Thursday 05 March 2026 00:35:57 +0000 (0:00:00.037) 0:07:02.189 ******** 2026-03-05 00:35:58.064458 | orchestrator | 2026-03-05 00:35:58.064468 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-05 00:35:58.064478 | orchestrator | Thursday 05 March 2026 00:35:57 +0000 (0:00:00.044) 0:07:02.234 ******** 2026-03-05 00:35:58.064487 | orchestrator | 2026-03-05 00:35:58.064511 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-05 00:36:23.277356 | orchestrator | Thursday 05 March 2026 00:35:57 +0000 (0:00:00.037) 0:07:02.271 ******** 2026-03-05 00:36:23.277509 | orchestrator | 2026-03-05 00:36:23.277526 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-05 00:36:23.277539 | orchestrator | Thursday 05 March 2026 00:35:57 +0000 (0:00:00.037) 0:07:02.309 ******** 2026-03-05 00:36:23.277550 | orchestrator | 2026-03-05 
00:36:23.277562 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-05 00:36:23.277573 | orchestrator | Thursday 05 March 2026 00:35:58 +0000 (0:00:00.042) 0:07:02.352 ******** 2026-03-05 00:36:23.277584 | orchestrator | 2026-03-05 00:36:23.277595 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-05 00:36:23.277606 | orchestrator | Thursday 05 March 2026 00:35:58 +0000 (0:00:00.038) 0:07:02.390 ******** 2026-03-05 00:36:23.277617 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:36:23.277630 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:36:23.277640 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:36:23.277652 | orchestrator | 2026-03-05 00:36:23.277662 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-05 00:36:23.277673 | orchestrator | Thursday 05 March 2026 00:35:59 +0000 (0:00:01.168) 0:07:03.559 ******** 2026-03-05 00:36:23.277684 | orchestrator | changed: [testbed-manager] 2026-03-05 00:36:23.277697 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:36:23.277708 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:36:23.277718 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:36:23.277729 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:36:23.277740 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:36:23.277751 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:36:23.277762 | orchestrator | 2026-03-05 00:36:23.277787 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-05 00:36:23.277798 | orchestrator | Thursday 05 March 2026 00:36:00 +0000 (0:00:01.558) 0:07:05.117 ******** 2026-03-05 00:36:23.277809 | orchestrator | changed: [testbed-manager] 2026-03-05 00:36:23.277820 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:36:23.277831 | orchestrator | changed: [testbed-node-4] 2026-03-05 
00:36:23.277842 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:36:23.277853 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:36:23.277871 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:36:23.277892 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:36:23.277912 | orchestrator | 2026-03-05 00:36:23.277945 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-05 00:36:23.277967 | orchestrator | Thursday 05 March 2026 00:36:01 +0000 (0:00:01.157) 0:07:06.274 ******** 2026-03-05 00:36:23.277989 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:36:23.278003 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:36:23.278082 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:36:23.278098 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:36:23.278147 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:36:23.278167 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:36:23.278186 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:36:23.278205 | orchestrator | 2026-03-05 00:36:23.278226 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-05 00:36:23.278245 | orchestrator | Thursday 05 March 2026 00:36:04 +0000 (0:00:02.305) 0:07:08.580 ******** 2026-03-05 00:36:23.278295 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:36:23.278308 | orchestrator | 2026-03-05 00:36:23.278337 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-05 00:36:23.278349 | orchestrator | Thursday 05 March 2026 00:36:04 +0000 (0:00:00.104) 0:07:08.685 ******** 2026-03-05 00:36:23.278360 | orchestrator | ok: [testbed-manager] 2026-03-05 00:36:23.278371 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:36:23.278381 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:36:23.278392 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:36:23.278403 | 
orchestrator | changed: [testbed-node-0] 2026-03-05 00:36:23.278414 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:36:23.278424 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:36:23.278436 | orchestrator | 2026-03-05 00:36:23.278447 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-05 00:36:23.278459 | orchestrator | Thursday 05 March 2026 00:36:05 +0000 (0:00:00.972) 0:07:09.657 ******** 2026-03-05 00:36:23.278469 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:36:23.278480 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:36:23.278491 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:36:23.278501 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:36:23.278512 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:36:23.278523 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:36:23.278533 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:36:23.278544 | orchestrator | 2026-03-05 00:36:23.278555 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-05 00:36:23.278566 | orchestrator | Thursday 05 March 2026 00:36:05 +0000 (0:00:00.482) 0:07:10.139 ******** 2026-03-05 00:36:23.278578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:36:23.278592 | orchestrator | 2026-03-05 00:36:23.278603 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-05 00:36:23.278614 | orchestrator | Thursday 05 March 2026 00:36:06 +0000 (0:00:00.983) 0:07:11.123 ******** 2026-03-05 00:36:23.278624 | orchestrator | ok: [testbed-manager] 2026-03-05 00:36:23.278635 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:36:23.278646 | orchestrator | ok: 
[testbed-node-4] 2026-03-05 00:36:23.278657 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:36:23.278668 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:36:23.278679 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:36:23.278689 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:36:23.278701 | orchestrator | 2026-03-05 00:36:23.278712 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-05 00:36:23.278723 | orchestrator | Thursday 05 March 2026 00:36:07 +0000 (0:00:00.919) 0:07:12.043 ******** 2026-03-05 00:36:23.278734 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-05 00:36:23.278745 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-05 00:36:23.278777 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-05 00:36:23.278789 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-05 00:36:23.278800 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-05 00:36:23.278811 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-05 00:36:23.278822 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-05 00:36:23.278833 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-05 00:36:23.278844 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-05 00:36:23.278855 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-05 00:36:23.278866 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-05 00:36:23.278877 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-05 00:36:23.278888 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-05 00:36:23.278907 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-05 00:36:23.278918 | orchestrator | 2026-03-05 00:36:23.278929 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-03-05 00:36:23.278940 | orchestrator | Thursday 05 March 2026 00:36:10 +0000 (0:00:02.387) 0:07:14.430 ******** 2026-03-05 00:36:23.278951 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:36:23.278962 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:36:23.278973 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:36:23.278984 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:36:23.278995 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:36:23.279006 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:36:23.279017 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:36:23.279028 | orchestrator | 2026-03-05 00:36:23.279039 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-05 00:36:23.279050 | orchestrator | Thursday 05 March 2026 00:36:10 +0000 (0:00:00.633) 0:07:15.064 ******** 2026-03-05 00:36:23.279063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:36:23.279077 | orchestrator | 2026-03-05 00:36:23.279088 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-05 00:36:23.279099 | orchestrator | Thursday 05 March 2026 00:36:11 +0000 (0:00:00.791) 0:07:15.855 ******** 2026-03-05 00:36:23.279135 | orchestrator | ok: [testbed-manager] 2026-03-05 00:36:23.279146 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:36:23.279156 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:36:23.279167 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:36:23.279178 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:36:23.279189 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:36:23.279200 | orchestrator | ok: 
[testbed-node-2] 2026-03-05 00:36:23.279210 | orchestrator | 2026-03-05 00:36:23.279221 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-05 00:36:23.279232 | orchestrator | Thursday 05 March 2026 00:36:12 +0000 (0:00:00.828) 0:07:16.683 ******** 2026-03-05 00:36:23.279243 | orchestrator | ok: [testbed-manager] 2026-03-05 00:36:23.279259 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:36:23.279271 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:36:23.279282 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:36:23.279292 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:36:23.279303 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:36:23.279314 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:36:23.279325 | orchestrator | 2026-03-05 00:36:23.279336 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-05 00:36:23.279347 | orchestrator | Thursday 05 March 2026 00:36:13 +0000 (0:00:00.983) 0:07:17.667 ******** 2026-03-05 00:36:23.279358 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:36:23.279369 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:36:23.279379 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:36:23.279390 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:36:23.279401 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:36:23.279412 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:36:23.279423 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:36:23.279434 | orchestrator | 2026-03-05 00:36:23.279445 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-05 00:36:23.279456 | orchestrator | Thursday 05 March 2026 00:36:13 +0000 (0:00:00.459) 0:07:18.126 ******** 2026-03-05 00:36:23.279467 | orchestrator | ok: [testbed-manager] 2026-03-05 00:36:23.279478 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:36:23.279489 | 
orchestrator | ok: [testbed-node-4]
2026-03-05 00:36:23.279499 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:36:23.279510 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:36:23.279521 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:36:23.279540 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:36:23.279551 | orchestrator |
2026-03-05 00:36:23.279562 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-05 00:36:23.279572 | orchestrator | Thursday 05 March 2026 00:36:15 +0000 (0:00:01.449) 0:07:19.576 ********
2026-03-05 00:36:23.279583 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:36:23.279594 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:36:23.279605 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:36:23.279616 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:36:23.279627 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:36:23.279638 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:36:23.279649 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:36:23.279660 | orchestrator |
2026-03-05 00:36:23.279671 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-05 00:36:23.279682 | orchestrator | Thursday 05 March 2026 00:36:15 +0000 (0:00:00.471) 0:07:20.048 ********
2026-03-05 00:36:23.279693 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:23.279704 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:36:23.279714 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:36:23.279725 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:36:23.279736 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:36:23.279747 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:36:23.279763 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:36:23.279782 | orchestrator |
2026-03-05 00:36:23.279809 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-05 00:36:54.758408 | orchestrator | Thursday 05 March 2026 00:36:23 +0000 (0:00:07.565) 0:07:27.614 ********
2026-03-05 00:36:54.758524 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:54.758541 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:36:54.758554 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:36:54.758566 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:36:54.758577 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:36:54.758587 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:36:54.758598 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:36:54.758609 | orchestrator |
2026-03-05 00:36:54.758621 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-05 00:36:54.758641 | orchestrator | Thursday 05 March 2026 00:36:24 +0000 (0:00:01.378) 0:07:28.992 ********
2026-03-05 00:36:54.758660 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:54.758677 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:36:54.758695 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:36:54.758714 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:36:54.758734 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:36:54.758753 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:36:54.758772 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:36:54.758790 | orchestrator |
2026-03-05 00:36:54.758808 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-05 00:36:54.758824 | orchestrator | Thursday 05 March 2026 00:36:26 +0000 (0:00:01.623) 0:07:30.615 ********
2026-03-05 00:36:54.758843 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:54.758862 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:36:54.758881 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:36:54.758898 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:36:54.758917 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:36:54.758938 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:36:54.758957 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:36:54.758978 | orchestrator |
2026-03-05 00:36:54.758991 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-05 00:36:54.759004 | orchestrator | Thursday 05 March 2026 00:36:27 +0000 (0:00:01.636) 0:07:32.251 ********
2026-03-05 00:36:54.759019 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:54.759040 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:36:54.759057 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:36:54.759109 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:36:54.759156 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:36:54.759168 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:36:54.759179 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:36:54.759190 | orchestrator |
2026-03-05 00:36:54.759208 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-05 00:36:54.759226 | orchestrator | Thursday 05 March 2026 00:36:28 +0000 (0:00:00.870) 0:07:33.122 ********
2026-03-05 00:36:54.759245 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:36:54.759262 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:36:54.759282 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:36:54.759301 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:36:54.759319 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:36:54.759338 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:36:54.759358 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:36:54.759376 | orchestrator |
2026-03-05 00:36:54.759390 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-05 00:36:54.759401 | orchestrator | Thursday 05 March 2026 00:36:29 +0000 (0:00:01.038) 0:07:34.160 ********
2026-03-05 00:36:54.759412 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:36:54.759423 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:36:54.759434 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:36:54.759445 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:36:54.759455 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:36:54.759466 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:36:54.759477 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:36:54.759487 | orchestrator |
2026-03-05 00:36:54.759499 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-05 00:36:54.759510 | orchestrator | Thursday 05 March 2026 00:36:30 +0000 (0:00:00.508) 0:07:34.669 ********
2026-03-05 00:36:54.759521 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:54.759552 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:36:54.759563 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:36:54.759575 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:36:54.759594 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:36:54.759612 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:36:54.759631 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:36:54.759650 | orchestrator |
2026-03-05 00:36:54.759667 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-05 00:36:54.759684 | orchestrator | Thursday 05 March 2026 00:36:30 +0000 (0:00:00.489) 0:07:35.159 ********
2026-03-05 00:36:54.759704 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:54.759722 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:36:54.759733 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:36:54.759744 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:36:54.759755 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:36:54.759766 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:36:54.759776 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:36:54.759878 | orchestrator |
2026-03-05 00:36:54.759900 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-05 00:36:54.759919 | orchestrator | Thursday 05 March 2026 00:36:31 +0000 (0:00:00.504) 0:07:35.663 ********
2026-03-05 00:36:54.759938 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:54.759957 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:36:54.759975 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:36:54.760045 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:36:54.760123 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:36:54.760143 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:36:54.760163 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:36:54.760181 | orchestrator |
2026-03-05 00:36:54.760201 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-05 00:36:54.760221 | orchestrator | Thursday 05 March 2026 00:36:32 +0000 (0:00:00.704) 0:07:36.368 ********
2026-03-05 00:36:54.760240 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:54.760258 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:36:54.760278 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:36:54.760317 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:36:54.760337 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:36:54.760355 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:36:54.760375 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:36:54.760394 | orchestrator |
2026-03-05 00:36:54.760412 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-05 00:36:54.760458 | orchestrator | Thursday 05 March 2026 00:36:37 +0000 (0:00:05.655) 0:07:42.023 ********
2026-03-05 00:36:54.760479 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:36:54.760497 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:36:54.760516 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:36:54.760534 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:36:54.760553 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:36:54.760571 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:36:54.760588 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:36:54.760606 | orchestrator |
2026-03-05 00:36:54.760623 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-05 00:36:54.760640 | orchestrator | Thursday 05 March 2026 00:36:38 +0000 (0:00:00.511) 0:07:42.535 ********
2026-03-05 00:36:54.760686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:36:54.760710 | orchestrator |
2026-03-05 00:36:54.760727 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-05 00:36:54.760746 | orchestrator | Thursday 05 March 2026 00:36:39 +0000 (0:00:00.942) 0:07:43.477 ********
2026-03-05 00:36:54.760766 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:54.760785 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:36:54.760804 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:36:54.760823 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:36:54.760835 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:36:54.760845 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:36:54.760856 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:36:54.760867 | orchestrator |
2026-03-05 00:36:54.760878 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-05 00:36:54.760889 | orchestrator | Thursday 05 March 2026 00:36:41 +0000 (0:00:01.887) 0:07:45.365 ********
2026-03-05 00:36:54.760900 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:54.760911 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:36:54.760922 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:36:54.760932 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:36:54.760943 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:36:54.760953 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:36:54.760964 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:36:54.760975 | orchestrator |
2026-03-05 00:36:54.760986 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-05 00:36:54.760997 | orchestrator | Thursday 05 March 2026 00:36:42 +0000 (0:00:01.103) 0:07:46.468 ********
2026-03-05 00:36:54.761008 | orchestrator | ok: [testbed-manager]
2026-03-05 00:36:54.761019 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:36:54.761029 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:36:54.761040 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:36:54.761050 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:36:54.761138 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:36:54.761153 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:36:54.761164 | orchestrator |
2026-03-05 00:36:54.761174 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-05 00:36:54.761185 | orchestrator | Thursday 05 March 2026 00:36:42 +0000 (0:00:00.819) 0:07:47.288 ********
2026-03-05 00:36:54.761209 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:36:54.761223 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:36:54.761246 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:36:54.761256 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:36:54.761266 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:36:54.761276 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:36:54.761285 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-05 00:36:54.761295 | orchestrator |
2026-03-05 00:36:54.761304 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-05 00:36:54.761317 | orchestrator | Thursday 05 March 2026 00:36:44 +0000 (0:00:01.821) 0:07:49.109 ********
2026-03-05 00:36:54.761335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:36:54.761352 | orchestrator |
2026-03-05 00:36:54.761368 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-05 00:36:54.761384 | orchestrator | Thursday 05 March 2026 00:36:45 +0000 (0:00:00.770) 0:07:49.880 ********
2026-03-05 00:36:54.761401 | orchestrator | changed: [testbed-manager]
2026-03-05 00:36:54.761418 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:36:54.761436 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:36:54.761453 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:36:54.761469 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:36:54.761485 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:36:54.761495 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:36:54.761505 | orchestrator |
2026-03-05 00:36:54.761514 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-05 00:36:54.761537 | orchestrator | Thursday 05 March 2026 00:36:54 +0000 (0:00:09.214) 0:07:59.094 ********
2026-03-05 00:37:26.191961 | orchestrator | ok: [testbed-manager]
2026-03-05 00:37:26.192146 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:37:26.192164 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:37:26.192177 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:37:26.192188 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:37:26.192200 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:37:26.192210 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:37:26.192221 | orchestrator |
2026-03-05 00:37:26.192234 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-05 00:37:26.192247 | orchestrator | Thursday 05 March 2026 00:36:56 +0000 (0:00:01.910) 0:08:01.005 ********
2026-03-05 00:37:26.192257 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:37:26.192269 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:37:26.192279 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:37:26.192290 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:37:26.192301 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:37:26.192312 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:37:26.192323 | orchestrator |
2026-03-05 00:37:26.192334 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-05 00:37:26.192344 | orchestrator | Thursday 05 March 2026 00:36:57 +0000 (0:00:01.292) 0:08:02.297 ********
2026-03-05 00:37:26.192355 | orchestrator | changed: [testbed-manager]
2026-03-05 00:37:26.192368 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:37:26.192379 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:37:26.192390 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:37:26.192401 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:37:26.192437 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:37:26.192449 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:37:26.192460 | orchestrator |
2026-03-05 00:37:26.192471 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-05 00:37:26.192483 | orchestrator |
2026-03-05 00:37:26.192497 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-05 00:37:26.192509 | orchestrator | Thursday 05 March 2026 00:36:59 +0000 (0:00:01.276) 0:08:03.574 ********
2026-03-05 00:37:26.192522 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:37:26.192536 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:37:26.192548 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:37:26.192561 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:37:26.192575 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:37:26.192587 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:37:26.192599 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:37:26.192612 | orchestrator |
2026-03-05 00:37:26.192625 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-05 00:37:26.192638 | orchestrator |
2026-03-05 00:37:26.192651 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-05 00:37:26.192664 | orchestrator | Thursday 05 March 2026 00:36:59 +0000 (0:00:00.680) 0:08:04.255 ********
2026-03-05 00:37:26.192677 | orchestrator | changed: [testbed-manager]
2026-03-05 00:37:26.192689 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:37:26.192702 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:37:26.192714 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:37:26.192725 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:37:26.192736 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:37:26.192747 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:37:26.192757 | orchestrator |
2026-03-05 00:37:26.192768 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-05 00:37:26.192795 | orchestrator | Thursday 05 March 2026 00:37:01 +0000 (0:00:01.308) 0:08:05.563 ********
2026-03-05 00:37:26.192806 | orchestrator | ok: [testbed-manager]
2026-03-05 00:37:26.192817 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:37:26.192828 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:37:26.192838 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:37:26.192849 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:37:26.192860 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:37:26.192870 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:37:26.192881 | orchestrator |
2026-03-05 00:37:26.192892 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-05 00:37:26.192902 | orchestrator | Thursday 05 March 2026 00:37:02 +0000 (0:00:01.481) 0:08:07.045 ********
2026-03-05 00:37:26.192913 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:37:26.192924 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:37:26.192934 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:37:26.192945 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:37:26.192956 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:37:26.192966 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:37:26.192977 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:37:26.192988 | orchestrator |
2026-03-05 00:37:26.192998 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-05 00:37:26.193010 | orchestrator | Thursday 05 March 2026 00:37:03 +0000 (0:00:00.513) 0:08:07.558 ********
2026-03-05 00:37:26.193040 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:37:26.193054 | orchestrator |
2026-03-05 00:37:26.193065 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-05 00:37:26.193076 | orchestrator | Thursday 05 March 2026 00:37:04 +0000 (0:00:01.060) 0:08:08.618 ********
2026-03-05 00:37:26.193088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:37:26.193110 | orchestrator |
2026-03-05 00:37:26.193121 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-05 00:37:26.193132 | orchestrator | Thursday 05 March 2026 00:37:05 +0000 (0:00:00.810) 0:08:09.429 ********
2026-03-05 00:37:26.193142 | orchestrator | changed: [testbed-manager]
2026-03-05 00:37:26.193153 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:37:26.193164 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:37:26.193174 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:37:26.193185 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:37:26.193195 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:37:26.193206 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:37:26.193217 | orchestrator |
2026-03-05 00:37:26.193228 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-05 00:37:26.193256 | orchestrator | Thursday 05 March 2026 00:37:14 +0000 (0:00:09.137) 0:08:18.567 ********
2026-03-05 00:37:26.193268 | orchestrator | changed: [testbed-manager]
2026-03-05 00:37:26.193279 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:37:26.193289 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:37:26.193300 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:37:26.193311 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:37:26.193321 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:37:26.193332 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:37:26.193343 | orchestrator |
2026-03-05 00:37:26.193354 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-05 00:37:26.193364 | orchestrator | Thursday 05 March 2026 00:37:15 +0000 (0:00:01.036) 0:08:19.603 ********
2026-03-05 00:37:26.193375 | orchestrator | changed: [testbed-manager]
2026-03-05 00:37:26.193386 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:37:26.193396 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:37:26.193407 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:37:26.193417 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:37:26.193428 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:37:26.193438 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:37:26.193449 | orchestrator |
2026-03-05 00:37:26.193460 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-05 00:37:26.193471 | orchestrator | Thursday 05 March 2026 00:37:16 +0000 (0:00:01.391) 0:08:20.995 ********
2026-03-05 00:37:26.193482 | orchestrator | changed: [testbed-manager]
2026-03-05 00:37:26.193492 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:37:26.193503 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:37:26.193513 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:37:26.193524 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:37:26.193540 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:37:26.193557 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:37:26.193568 | orchestrator |
2026-03-05 00:37:26.193579 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-05 00:37:26.193590 | orchestrator | Thursday 05 March 2026 00:37:18 +0000 (0:00:02.090) 0:08:23.085 ********
2026-03-05 00:37:26.193600 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:37:26.193611 | orchestrator | changed: [testbed-manager]
2026-03-05 00:37:26.193622 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:37:26.193632 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:37:26.193643 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:37:26.193653 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:37:26.193664 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:37:26.193674 | orchestrator |
2026-03-05 00:37:26.193685 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-05 00:37:26.193696 | orchestrator | Thursday 05 March 2026 00:37:20 +0000 (0:00:01.322) 0:08:24.408 ********
2026-03-05 00:37:26.193706 | orchestrator | changed: [testbed-manager]
2026-03-05 00:37:26.193717 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:37:26.193727 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:37:26.193753 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:37:26.193765 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:37:26.193775 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:37:26.193786 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:37:26.193797 | orchestrator |
2026-03-05 00:37:26.193807 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-05 00:37:26.193818 | orchestrator |
2026-03-05 00:37:26.193835 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-05 00:37:26.193846 | orchestrator | Thursday 05 March 2026 00:37:21 +0000 (0:00:01.309) 0:08:25.718 ********
2026-03-05 00:37:26.193857 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:37:26.193870 | orchestrator |
2026-03-05 00:37:26.193888 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-05 00:37:26.193899 | orchestrator | Thursday 05 March 2026 00:37:22 +0000 (0:00:00.800) 0:08:26.519 ********
2026-03-05 00:37:26.193910 | orchestrator | ok: [testbed-manager]
2026-03-05 00:37:26.193921 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:37:26.193932 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:37:26.193943 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:37:26.193953 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:37:26.193964 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:37:26.193975 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:37:26.193985 | orchestrator |
2026-03-05 00:37:26.193996 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-05 00:37:26.194007 | orchestrator | Thursday 05 March 2026 00:37:23 +0000 (0:00:01.030) 0:08:27.549 ********
2026-03-05 00:37:26.194101 | orchestrator | changed: [testbed-manager]
2026-03-05 00:37:26.194115 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:37:26.194130 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:37:26.194148 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:37:26.194160 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:37:26.194170 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:37:26.194181 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:37:26.194192 | orchestrator |
2026-03-05 00:37:26.194203 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-05 00:37:26.194214 | orchestrator | Thursday 05 March 2026 00:37:24 +0000 (0:00:01.148) 0:08:28.697 ********
2026-03-05 00:37:26.194225 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:37:26.194236 | orchestrator |
2026-03-05 00:37:26.194247 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-05 00:37:26.194257 | orchestrator | Thursday 05 March 2026 00:37:25 +0000 (0:00:00.808) 0:08:29.506 ********
2026-03-05 00:37:26.194268 | orchestrator | ok: [testbed-manager]
2026-03-05 00:37:26.194279 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:37:26.194290 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:37:26.194307 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:37:26.194320 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:37:26.194331 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:37:26.194342 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:37:26.194353 | orchestrator |
2026-03-05 00:37:26.194364 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-05 00:37:26.194384 | orchestrator | Thursday 05 March 2026 00:37:26 +0000 (0:00:01.021) 0:08:30.528 ********
2026-03-05 00:37:27.652971 | orchestrator | changed: [testbed-manager]
2026-03-05 00:37:27.653158 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:37:27.653178 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:37:27.653190 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:37:27.653201 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:37:27.653212 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:37:27.653223 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:37:27.653234 | orchestrator |
2026-03-05 00:37:27.653320 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:37:27.653337 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-05 00:37:27.653350 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-05 00:37:27.653361 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-05 00:37:27.653372 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-05 00:37:27.653382 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-05 00:37:27.653393 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-05 00:37:27.653404 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-05 00:37:27.653414 | orchestrator |
2026-03-05 00:37:27.653425 | orchestrator |
2026-03-05 00:37:27.653436 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:37:27.653453 | orchestrator | Thursday 05 March 2026 00:37:27 +0000 (0:00:01.052) 0:08:31.580 ********
2026-03-05 00:37:27.653472 | orchestrator | ===============================================================================
2026-03-05 00:37:27.653483 | orchestrator | osism.commons.packages : Install required packages --------------------- 90.02s
2026-03-05 00:37:27.653494 | orchestrator | osism.commons.packages : Download required packages -------------------- 47.96s
2026-03-05 00:37:27.653504 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.54s
2026-03-05 00:37:27.653516 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.77s
2026-03-05 00:37:27.653529 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.39s
2026-03-05 00:37:27.653555 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.97s
2026-03-05 00:37:27.653569 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.05s
2026-03-05 00:37:27.653582 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.76s
2026-03-05 00:37:27.653595 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.32s
2026-03-05 00:37:27.653607 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.21s
2026-03-05 00:37:27.653620 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.14s
2026-03-05 00:37:27.653632 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.85s
2026-03-05 00:37:27.653645 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.55s
2026-03-05 00:37:27.653659 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.42s
2026-03-05 00:37:27.653671 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.84s
2026-03-05 00:37:27.653683 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.57s
2026-03-05 00:37:27.653696 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.80s
2026-03-05 00:37:27.653708 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.75s
2026-03-05 00:37:27.653721 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.66s
2026-03-05 00:37:27.653733 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.49s
2026-03-05 00:37:27.929782 | orchestrator | + osism apply fail2ban
2026-03-05 00:37:40.429491 | orchestrator | 2026-03-05 00:37:40 | INFO  | Task 555fdbc8-3a44-46ff-a5c7-758b977719e8 (fail2ban) was prepared for execution.
2026-03-05 00:37:40.429598 | orchestrator | 2026-03-05 00:37:40 | INFO  | It takes a moment until task 555fdbc8-3a44-46ff-a5c7-758b977719e8 (fail2ban) has been started and output is visible here.
2026-03-05 00:38:02.045787 | orchestrator |
2026-03-05 00:38:02.045905 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-05 00:38:02.045922 | orchestrator |
2026-03-05 00:38:02.045935 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-05 00:38:02.045947 | orchestrator | Thursday 05 March 2026 00:37:44 +0000 (0:00:00.261) 0:00:00.261 ********
2026-03-05 00:38:02.045961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:38:02.045975 | orchestrator |
2026-03-05 00:38:02.045987 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-05 00:38:02.045999 | orchestrator | Thursday 05 March 2026 00:37:45 +0000 (0:00:01.087) 0:00:01.348 ********
2026-03-05 00:38:02.046140 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:38:02.046160 | orchestrator | changed: [testbed-manager]
2026-03-05 00:38:02.046171 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:38:02.046182 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:38:02.046193 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:38:02.046204 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:38:02.046215 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:38:02.046226 | orchestrator |
2026-03-05 00:38:02.046238 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-05 00:38:02.046249 | orchestrator | Thursday 05 March 2026 00:37:57 +0000 (0:00:11.214) 0:00:12.562 ********
2026-03-05 00:38:02.046260 | orchestrator | changed: [testbed-manager] 2026-03-05 00:38:02.046272 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:38:02.046283 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:38:02.046294 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:38:02.046305 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:38:02.046316 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:38:02.046327 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:38:02.046340 | orchestrator | 2026-03-05 00:38:02.046353 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-05 00:38:02.046368 | orchestrator | Thursday 05 March 2026 00:37:58 +0000 (0:00:01.454) 0:00:14.017 ******** 2026-03-05 00:38:02.046381 | orchestrator | ok: [testbed-manager] 2026-03-05 00:38:02.046395 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:38:02.046408 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:38:02.046422 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:38:02.046435 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:38:02.046447 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:38:02.046460 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:38:02.046473 | orchestrator | 2026-03-05 00:38:02.046486 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-05 00:38:02.046499 | orchestrator | Thursday 05 March 2026 00:37:59 +0000 (0:00:01.419) 0:00:15.437 ******** 2026-03-05 00:38:02.046513 | orchestrator | changed: [testbed-manager] 2026-03-05 00:38:02.046527 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:38:02.046544 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:38:02.046563 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:38:02.046581 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:38:02.046599 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:38:02.046617 | orchestrator | changed: 
[testbed-node-5] 2026-03-05 00:38:02.046636 | orchestrator | 2026-03-05 00:38:02.046654 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:38:02.046676 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:38:02.046733 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:38:02.046746 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:38:02.046757 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:38:02.046768 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:38:02.046779 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:38:02.046789 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:38:02.046800 | orchestrator | 2026-03-05 00:38:02.046811 | orchestrator | 2026-03-05 00:38:02.046822 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:38:02.046833 | orchestrator | Thursday 05 March 2026 00:38:01 +0000 (0:00:01.688) 0:00:17.125 ******** 2026-03-05 00:38:02.046844 | orchestrator | =============================================================================== 2026-03-05 00:38:02.046854 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.21s 2026-03-05 00:38:02.046865 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.69s 2026-03-05 00:38:02.046876 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.45s 2026-03-05 00:38:02.046886 | orchestrator | osism.services.fail2ban : 
Manage fail2ban service ----------------------- 1.42s 2026-03-05 00:38:02.046897 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.09s 2026-03-05 00:38:02.342619 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-05 00:38:02.342723 | orchestrator | + osism apply network 2026-03-05 00:38:14.298306 | orchestrator | 2026-03-05 00:38:14 | INFO  | Task ab86bcba-2e65-4bee-a914-67741ed7d534 (network) was prepared for execution. 2026-03-05 00:38:14.298431 | orchestrator | 2026-03-05 00:38:14 | INFO  | It takes a moment until task ab86bcba-2e65-4bee-a914-67741ed7d534 (network) has been started and output is visible here. 2026-03-05 00:38:41.483292 | orchestrator | 2026-03-05 00:38:41.483428 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-05 00:38:41.483457 | orchestrator | 2026-03-05 00:38:41.483476 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-05 00:38:41.483496 | orchestrator | Thursday 05 March 2026 00:38:18 +0000 (0:00:00.224) 0:00:00.224 ******** 2026-03-05 00:38:41.483516 | orchestrator | ok: [testbed-manager] 2026-03-05 00:38:41.483536 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:38:41.483556 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:38:41.483575 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:38:41.483593 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:38:41.483613 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:38:41.483632 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:38:41.483651 | orchestrator | 2026-03-05 00:38:41.483671 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-05 00:38:41.483689 | orchestrator | Thursday 05 March 2026 00:38:18 +0000 (0:00:00.536) 0:00:00.761 ******** 2026-03-05 00:38:41.483711 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:38:41.483733 | orchestrator | 2026-03-05 00:38:41.483752 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-05 00:38:41.483770 | orchestrator | Thursday 05 March 2026 00:38:19 +0000 (0:00:00.988) 0:00:01.750 ******** 2026-03-05 00:38:41.483819 | orchestrator | ok: [testbed-manager] 2026-03-05 00:38:41.483842 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:38:41.483861 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:38:41.483879 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:38:41.483899 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:38:41.483919 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:38:41.483939 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:38:41.483960 | orchestrator | 2026-03-05 00:38:41.483982 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-05 00:38:41.484051 | orchestrator | Thursday 05 March 2026 00:38:21 +0000 (0:00:01.920) 0:00:03.670 ******** 2026-03-05 00:38:41.484073 | orchestrator | ok: [testbed-manager] 2026-03-05 00:38:41.484092 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:38:41.484109 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:38:41.484129 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:38:41.484150 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:38:41.484167 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:38:41.484187 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:38:41.484206 | orchestrator | 2026-03-05 00:38:41.484225 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-05 00:38:41.484243 | orchestrator | Thursday 05 March 2026 00:38:23 +0000 (0:00:01.611) 0:00:05.282 ******** 
2026-03-05 00:38:41.484261 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-03-05 00:38:41.484279 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-03-05 00:38:41.484298 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-05 00:38:41.484316 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-05 00:38:41.484335 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-05 00:38:41.484354 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-05 00:38:41.484373 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-05 00:38:41.484392 | orchestrator | 2026-03-05 00:38:41.484430 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-03-05 00:38:41.484451 | orchestrator | Thursday 05 March 2026 00:38:24 +0000 (0:00:00.891) 0:00:06.174 ******** 2026-03-05 00:38:41.484474 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 00:38:41.484492 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-05 00:38:41.484508 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-05 00:38:41.484525 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-05 00:38:41.484540 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 00:38:41.484556 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-05 00:38:41.484574 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-05 00:38:41.484590 | orchestrator | 2026-03-05 00:38:41.484607 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-05 00:38:41.484624 | orchestrator | Thursday 05 March 2026 00:38:27 +0000 (0:00:02.998) 0:00:09.172 ******** 2026-03-05 00:38:41.484642 | orchestrator | changed: [testbed-manager] 2026-03-05 00:38:41.484658 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:38:41.484674 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:38:41.484691 | orchestrator | changed: 
[testbed-node-2] 2026-03-05 00:38:41.484707 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:38:41.484724 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:38:41.484740 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:38:41.484756 | orchestrator | 2026-03-05 00:38:41.484773 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-03-05 00:38:41.484789 | orchestrator | Thursday 05 March 2026 00:38:28 +0000 (0:00:01.628) 0:00:10.800 ******** 2026-03-05 00:38:41.484805 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 00:38:41.484822 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 00:38:41.484838 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-05 00:38:41.484853 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-05 00:38:41.484870 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-05 00:38:41.484899 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-05 00:38:41.484916 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-05 00:38:41.484932 | orchestrator | 2026-03-05 00:38:41.484948 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-05 00:38:41.484965 | orchestrator | Thursday 05 March 2026 00:38:30 +0000 (0:00:01.641) 0:00:12.442 ******** 2026-03-05 00:38:41.484983 | orchestrator | ok: [testbed-manager] 2026-03-05 00:38:41.484999 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:38:41.485043 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:38:41.485060 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:38:41.485078 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:38:41.485097 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:38:41.485115 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:38:41.485133 | orchestrator | 2026-03-05 00:38:41.485152 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-05 00:38:41.485191 | 
orchestrator | Thursday 05 March 2026 00:38:31 +0000 (0:00:01.111) 0:00:13.554 ******** 2026-03-05 00:38:41.485212 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:38:41.485228 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:38:41.485246 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:38:41.485263 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:38:41.485280 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:38:41.485297 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:38:41.485314 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:38:41.485331 | orchestrator | 2026-03-05 00:38:41.485349 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-03-05 00:38:41.485366 | orchestrator | Thursday 05 March 2026 00:38:32 +0000 (0:00:00.705) 0:00:14.260 ******** 2026-03-05 00:38:41.485383 | orchestrator | ok: [testbed-manager] 2026-03-05 00:38:41.485402 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:38:41.485419 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:38:41.485436 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:38:41.485452 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:38:41.485470 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:38:41.485487 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:38:41.485505 | orchestrator | 2026-03-05 00:38:41.485522 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-05 00:38:41.485540 | orchestrator | Thursday 05 March 2026 00:38:34 +0000 (0:00:02.312) 0:00:16.572 ******** 2026-03-05 00:38:41.485557 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:38:41.485574 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:38:41.485592 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:38:41.485609 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:38:41.485626 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:38:41.485643 | 
orchestrator | skipping: [testbed-node-5] 2026-03-05 00:38:41.485661 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-05 00:38:41.485679 | orchestrator | 2026-03-05 00:38:41.485697 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-05 00:38:41.485713 | orchestrator | Thursday 05 March 2026 00:38:35 +0000 (0:00:00.913) 0:00:17.485 ******** 2026-03-05 00:38:41.485732 | orchestrator | ok: [testbed-manager] 2026-03-05 00:38:41.485749 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:38:41.485766 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:38:41.485782 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:38:41.485800 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:38:41.485817 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:38:41.485835 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:38:41.485852 | orchestrator | 2026-03-05 00:38:41.485868 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-05 00:38:41.485878 | orchestrator | Thursday 05 March 2026 00:38:37 +0000 (0:00:01.673) 0:00:19.159 ******** 2026-03-05 00:38:41.485889 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:38:41.485911 | orchestrator | 2026-03-05 00:38:41.485921 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-05 00:38:41.485931 | orchestrator | Thursday 05 March 2026 00:38:38 +0000 (0:00:01.289) 0:00:20.449 ******** 2026-03-05 00:38:41.485940 | orchestrator | ok: [testbed-manager] 2026-03-05 00:38:41.485950 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:38:41.485959 | orchestrator 
| ok: [testbed-node-1] 2026-03-05 00:38:41.485969 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:38:41.485978 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:38:41.485994 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:38:41.486070 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:38:41.486083 | orchestrator | 2026-03-05 00:38:41.486094 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-05 00:38:41.486103 | orchestrator | Thursday 05 March 2026 00:38:39 +0000 (0:00:01.096) 0:00:21.546 ******** 2026-03-05 00:38:41.486113 | orchestrator | ok: [testbed-manager] 2026-03-05 00:38:41.486123 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:38:41.486132 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:38:41.486142 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:38:41.486151 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:38:41.486161 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:38:41.486170 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:38:41.486180 | orchestrator | 2026-03-05 00:38:41.486190 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-05 00:38:41.486199 | orchestrator | Thursday 05 March 2026 00:38:40 +0000 (0:00:00.637) 0:00:22.183 ******** 2026-03-05 00:38:41.486209 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-05 00:38:41.486219 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-05 00:38:41.486228 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-05 00:38:41.486238 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-05 00:38:41.486248 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-05 00:38:41.486257 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-05 00:38:41.486267 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-05 00:38:41.486276 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-05 00:38:41.486286 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-05 00:38:41.486295 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-05 00:38:41.486305 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-05 00:38:41.486315 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-05 00:38:41.486324 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-05 00:38:41.486334 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-05 00:38:41.486344 | orchestrator | 2026-03-05 00:38:41.486364 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-05 00:38:57.418820 | orchestrator | Thursday 05 March 2026 00:38:41 +0000 (0:00:01.227) 0:00:23.410 ******** 2026-03-05 00:38:57.418930 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:38:57.418948 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:38:57.418961 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:38:57.418973 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:38:57.418984 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:38:57.418995 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:38:57.419045 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:38:57.419058 | orchestrator | 2026-03-05 00:38:57.419070 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-05 00:38:57.419106 | orchestrator | Thursday 05 March 2026 00:38:42 +0000 (0:00:00.626) 0:00:24.037 ******** 2026-03-05 00:38:57.419120 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-2, testbed-manager, testbed-node-3, testbed-node-1, testbed-node-0, testbed-node-4, testbed-node-5 2026-03-05 00:38:57.419134 | orchestrator | 2026-03-05 00:38:57.419146 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-05 00:38:57.419157 | orchestrator | Thursday 05 March 2026 00:38:46 +0000 (0:00:04.672) 0:00:28.710 ******** 2026-03-05 00:38:57.419169 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-05 00:38:57.419184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-05 00:38:57.419195 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-05 00:38:57.419206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-05 00:38:57.419218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 
42}}) 2026-03-05 00:38:57.419245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-05 00:38:57.419256 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-05 00:38:57.419267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-05 00:38:57.419293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-05 00:38:57.419305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-05 00:38:57.419340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-05 00:38:57.419381 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-05 00:38:57.419417 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-05 00:38:57.419436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-05 00:38:57.419447 | orchestrator | 2026-03-05 00:38:57.419458 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-05 00:38:57.419471 | orchestrator | Thursday 05 March 2026 00:38:52 +0000 (0:00:05.427) 0:00:34.137 ******** 2026-03-05 00:38:57.419482 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-05 00:38:57.419494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-05 00:38:57.419505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-05 00:38:57.419515 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-05 00:38:57.419526 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-05 00:38:57.419543 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-05 00:38:57.419555 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-05 00:38:57.419566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-05 00:38:57.419577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-05 00:38:57.419588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 
'mtu': 1350, 'vni': 23}}) 2026-03-05 00:38:57.419599 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-05 00:38:57.419617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-05 00:38:57.419637 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-05 00:39:03.372474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-05 00:39:03.372581 | orchestrator | 2026-03-05 00:39:03.372597 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-05 00:39:03.372610 | orchestrator | Thursday 05 March 2026 00:38:57 +0000 (0:00:05.210) 0:00:39.347 ******** 2026-03-05 00:39:03.372623 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:39:03.372635 | orchestrator | 2026-03-05 00:39:03.372646 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-03-05 00:39:03.372657 | orchestrator | Thursday 05 March 2026  00:38:58 +0000 (0:00:01.210)       0:00:40.558 ********
2026-03-05 00:39:03.372668 | orchestrator | ok: [testbed-manager]
2026-03-05 00:39:03.372681 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:39:03.372692 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:39:03.372702 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:39:03.372713 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:39:03.372724 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:39:03.372735 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:39:03.372746 | orchestrator |
2026-03-05 00:39:03.372757 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-05 00:39:03.372768 | orchestrator | Thursday 05 March 2026  00:38:59 +0000 (0:00:01.158)       0:00:41.717 ********
2026-03-05 00:39:03.372779 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-05 00:39:03.372791 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-05 00:39:03.372802 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-05 00:39:03.372813 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-05 00:39:03.372824 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-05 00:39:03.372835 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-05 00:39:03.372846 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-05 00:39:03.372857 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-05 00:39:03.372867 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:39:03.372879 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-05 00:39:03.372890 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-05 00:39:03.372918 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-05 00:39:03.372930 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-05 00:39:03.372940 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:39:03.372951 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-05 00:39:03.372984 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-05 00:39:03.372996 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-05 00:39:03.373050 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-05 00:39:03.373072 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:39:03.373091 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-05 00:39:03.373112 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-05 00:39:03.373126 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-05 00:39:03.373139 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-05 00:39:03.373151 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:39:03.373164 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-05 00:39:03.373178 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-05 00:39:03.373191 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-05 00:39:03.373205 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-05 00:39:03.373218 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:39:03.373229 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:39:03.373239 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-05 00:39:03.373250 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-05 00:39:03.373261 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-05 00:39:03.373271 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-05 00:39:03.373282 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:39:03.373293 | orchestrator |
2026-03-05 00:39:03.373304 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-05 00:39:03.373333 | orchestrator | Thursday 05 March 2026  00:39:01 +0000 (0:00:01.897)       0:00:43.614 ********
2026-03-05 00:39:03.373345 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:39:03.373356 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:39:03.373367 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:39:03.373377 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:39:03.373388 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:39:03.373399 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:39:03.373409 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:39:03.373420 | orchestrator |
2026-03-05 00:39:03.373431 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-05 00:39:03.373442 | orchestrator | Thursday 05 March 2026  00:39:02 +0000 (0:00:00.629)       0:00:44.244 ********
2026-03-05 00:39:03.373452 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:39:03.373463 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:39:03.373474 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:39:03.373484 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:39:03.373496 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:39:03.373506 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:39:03.373517 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:39:03.373527 | orchestrator |
2026-03-05 00:39:03.373538 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:39:03.373551 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-05 00:39:03.373563 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-05 00:39:03.373584 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-05 00:39:03.373595 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-05 00:39:03.373606 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-05 00:39:03.373617 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-05 00:39:03.373628 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-05 00:39:03.373638 | orchestrator |
2026-03-05 00:39:03.373649 | orchestrator |
2026-03-05 00:39:03.373660 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:39:03.373671 | orchestrator | Thursday 05 March 2026  00:39:02 +0000 (0:00:00.690)       0:00:44.934 ********
2026-03-05 00:39:03.373682 | orchestrator | ===============================================================================
2026-03-05 00:39:03.373698 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.43s
2026-03-05 00:39:03.373710 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.21s
2026-03-05 00:39:03.373721 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.67s
2026-03-05 00:39:03.373731 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.00s
2026-03-05 00:39:03.373742 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.31s
2026-03-05 00:39:03.373753 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.92s
2026-03-05 00:39:03.373764 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.90s
2026-03-05 00:39:03.373774 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.67s
2026-03-05 00:39:03.373785 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.64s
2026-03-05 00:39:03.373796 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.63s
2026-03-05 00:39:03.373806 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.61s
2026-03-05 00:39:03.373817 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.29s
2026-03-05 00:39:03.373828 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.23s
2026-03-05 00:39:03.373838 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.21s
2026-03-05 00:39:03.373849 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s
2026-03-05 00:39:03.373860 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.11s
2026-03-05 00:39:03.373870 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.10s
2026-03-05 00:39:03.373881 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 0.99s
2026-03-05 00:39:03.373892 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.91s
2026-03-05 00:39:03.373902 | orchestrator | osism.commons.network : Create required directories --------------------- 0.89s
2026-03-05 00:39:03.642509 | orchestrator | + osism apply wireguard
2026-03-05 00:39:15.578187 | orchestrator | 2026-03-05 00:39:15 | INFO  | Task 6b9f2568-6e39-45d4-a47e-2bfaeb53f3b9 (wireguard) was prepared for execution.
2026-03-05 00:39:15.578283 | orchestrator | 2026-03-05 00:39:15 | INFO  | It takes a moment until task 6b9f2568-6e39-45d4-a47e-2bfaeb53f3b9 (wireguard) has been started and output is visible here.
2026-03-05 00:39:32.425054 | orchestrator |
2026-03-05 00:39:32.425161 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-05 00:39:32.425205 | orchestrator |
2026-03-05 00:39:32.425218 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-05 00:39:32.425229 | orchestrator | Thursday 05 March 2026  00:39:19 +0000 (0:00:00.160)       0:00:00.160 ********
2026-03-05 00:39:32.425240 | orchestrator | ok: [testbed-manager]
2026-03-05 00:39:32.425252 | orchestrator |
2026-03-05 00:39:32.425263 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-05 00:39:32.425275 | orchestrator | Thursday 05 March 2026  00:39:20 +0000 (0:00:01.122)       0:00:01.283 ********
2026-03-05 00:39:32.425286 | orchestrator | changed: [testbed-manager]
2026-03-05 00:39:32.425297 | orchestrator |
2026-03-05 00:39:32.425313 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-05 00:39:32.425325 | orchestrator | Thursday 05 March 2026  00:39:25 +0000 (0:00:05.017)       0:00:06.301 ********
2026-03-05 00:39:32.425336 | orchestrator | changed: [testbed-manager]
2026-03-05 00:39:32.425347 | orchestrator |
2026-03-05 00:39:32.425357 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-05 00:39:32.425368 | orchestrator | Thursday 05 March 2026  00:39:25 +0000 (0:00:00.503)       0:00:06.804 ********
2026-03-05 00:39:32.425379 | orchestrator | changed: [testbed-manager]
2026-03-05 00:39:32.425390 | orchestrator |
2026-03-05 00:39:32.425400 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-05 00:39:32.425411 | orchestrator | Thursday 05 March 2026  00:39:26 +0000 (0:00:00.379)       0:00:07.183 ********
2026-03-05 00:39:32.425422 | orchestrator | ok: [testbed-manager]
2026-03-05 00:39:32.425433 | orchestrator |
2026-03-05 00:39:32.425443 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-05 00:39:32.425454 | orchestrator | Thursday 05 March 2026  00:39:26 +0000 (0:00:00.622)       0:00:07.806 ********
2026-03-05 00:39:32.425465 | orchestrator | ok: [testbed-manager]
2026-03-05 00:39:32.425476 | orchestrator |
2026-03-05 00:39:32.425487 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-05 00:39:32.425498 | orchestrator | Thursday 05 March 2026  00:39:27 +0000 (0:00:00.404)       0:00:08.210 ********
2026-03-05 00:39:32.425508 | orchestrator | ok: [testbed-manager]
2026-03-05 00:39:32.425519 | orchestrator |
2026-03-05 00:39:32.425530 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-05 00:39:32.425543 | orchestrator | Thursday 05 March 2026  00:39:27 +0000 (0:00:00.403)       0:00:08.614 ********
2026-03-05 00:39:32.425556 | orchestrator | changed: [testbed-manager]
2026-03-05 00:39:32.425568 | orchestrator |
2026-03-05 00:39:32.425581 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-05 00:39:32.425594 | orchestrator | Thursday 05 March 2026  00:39:28 +0000 (0:00:01.110)       0:00:09.725 ********
2026-03-05 00:39:32.425606 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-05 00:39:32.425617 | orchestrator | changed: [testbed-manager]
2026-03-05 00:39:32.425628 | orchestrator |
2026-03-05 00:39:32.425639 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-05 00:39:32.425650 | orchestrator | Thursday 05 March 2026  00:39:29 +0000 (0:00:00.866)       0:00:10.592 ********
2026-03-05 00:39:32.425661 | orchestrator | changed: [testbed-manager]
2026-03-05 00:39:32.425671 | orchestrator |
2026-03-05 00:39:32.425683 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-05 00:39:32.425694 | orchestrator | Thursday 05 March 2026  00:39:31 +0000 (0:00:01.666)       0:00:12.258 ********
2026-03-05 00:39:32.425705 | orchestrator | changed: [testbed-manager]
2026-03-05 00:39:32.425716 | orchestrator |
2026-03-05 00:39:32.425727 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:39:32.425738 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:39:32.425750 | orchestrator |
2026-03-05 00:39:32.425761 | orchestrator |
2026-03-05 00:39:32.425772 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:39:32.425791 | orchestrator | Thursday 05 March 2026  00:39:32 +0000 (0:00:00.807)       0:00:13.066 ********
2026-03-05 00:39:32.425802 | orchestrator | ===============================================================================
2026-03-05 00:39:32.425813 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.02s
2026-03-05 00:39:32.425824 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.67s
2026-03-05 00:39:32.425834 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.12s
2026-03-05 00:39:32.425845 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.11s
2026-03-05 00:39:32.425856 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.87s
2026-03-05 00:39:32.425867 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.81s
2026-03-05 00:39:32.425877 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.62s
2026-03-05 00:39:32.425888 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.50s
2026-03-05 00:39:32.425899 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s
2026-03-05 00:39:32.425910 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s
2026-03-05 00:39:32.425921 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.38s
2026-03-05 00:39:32.716061 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-05 00:39:32.749365 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-05 00:39:32.749489 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-05 00:39:32.832168 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 168 0 --:--:-- --:--:-- --:--:-- 168
2026-03-05 00:39:32.848995 | orchestrator | + osism apply --environment custom workarounds
2026-03-05 00:39:34.792340 | orchestrator | 2026-03-05 00:39:34 | INFO  | Trying to run play workarounds in environment custom
2026-03-05 00:39:44.886408 | orchestrator | 2026-03-05 00:39:44 | INFO  | Task 9228be4b-a73c-482e-91ac-225cb28a1e77 (workarounds) was prepared for execution.
2026-03-05 00:39:44.886573 | orchestrator | 2026-03-05 00:39:44 | INFO  | It takes a moment until task 9228be4b-a73c-482e-91ac-225cb28a1e77 (workarounds) has been started and output is visible here.
2026-03-05 00:40:08.136883 | orchestrator |
2026-03-05 00:40:08.136981 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 00:40:08.136994 | orchestrator |
2026-03-05 00:40:08.137003 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-05 00:40:08.137011 | orchestrator | Thursday 05 March 2026  00:39:48 +0000 (0:00:00.092)       0:00:00.092 ********
2026-03-05 00:40:08.137062 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-05 00:40:08.137078 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-05 00:40:08.137086 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-05 00:40:08.137095 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-05 00:40:08.137103 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-05 00:40:08.137111 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-05 00:40:08.137119 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-05 00:40:08.137127 | orchestrator |
2026-03-05 00:40:08.137135 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-05 00:40:08.137143 | orchestrator |
2026-03-05 00:40:08.137151 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-05 00:40:08.137159 | orchestrator | Thursday 05 March 2026  00:39:49 +0000 (0:00:00.651)       0:00:00.744 ********
2026-03-05 00:40:08.137167 | orchestrator | ok: [testbed-manager]
2026-03-05 00:40:08.137176 | orchestrator |
2026-03-05 00:40:08.137206 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-05 00:40:08.137215 | orchestrator |
2026-03-05 00:40:08.137223 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-05 00:40:08.137231 | orchestrator | Thursday 05 March 2026  00:39:51 +0000 (0:00:02.064)       0:00:02.809 ********
2026-03-05 00:40:08.137239 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:40:08.137247 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:40:08.137255 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:40:08.137262 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:40:08.137270 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:40:08.137278 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:40:08.137286 | orchestrator |
2026-03-05 00:40:08.137293 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-05 00:40:08.137301 | orchestrator |
2026-03-05 00:40:08.137309 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-05 00:40:08.137330 | orchestrator | Thursday 05 March 2026  00:39:53 +0000 (0:00:01.785)       0:00:04.594 ********
2026-03-05 00:40:08.137339 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-05 00:40:08.137348 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-05 00:40:08.137356 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-05 00:40:08.137364 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-05 00:40:08.137373 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-05 00:40:08.137383 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-05 00:40:08.137392 | orchestrator |
2026-03-05 00:40:08.137401 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-05 00:40:08.137411 | orchestrator | Thursday 05 March 2026  00:39:54 +0000 (0:00:01.249)       0:00:05.844 ********
2026-03-05 00:40:08.137420 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:40:08.137430 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:40:08.137439 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:40:08.137448 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:40:08.137457 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:40:08.137466 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:40:08.137475 | orchestrator |
2026-03-05 00:40:08.137485 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-05 00:40:08.137494 | orchestrator | Thursday 05 March 2026  00:39:57 +0000 (0:00:03.548)       0:00:09.392 ********
2026-03-05 00:40:08.137504 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:40:08.137513 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:40:08.137523 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:40:08.137532 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:40:08.137541 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:40:08.137551 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:40:08.137560 | orchestrator |
2026-03-05 00:40:08.137569 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-05 00:40:08.137579 | orchestrator |
2026-03-05 00:40:08.137588 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-05 00:40:08.137597 | orchestrator | Thursday 05 March 2026  00:39:58 +0000 (0:00:00.570)       0:00:09.962 ********
2026-03-05 00:40:08.137606 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:40:08.137616 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:40:08.137623 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:40:08.137631 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:40:08.137639 | orchestrator | changed: [testbed-manager]
2026-03-05 00:40:08.137647 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:40:08.137654 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:40:08.137674 | orchestrator |
2026-03-05 00:40:08.137693 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-05 00:40:08.137711 | orchestrator | Thursday 05 March 2026  00:39:59 +0000 (0:00:01.395)       0:00:11.358 ********
2026-03-05 00:40:08.137723 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:40:08.137735 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:40:08.137747 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:40:08.137759 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:40:08.137772 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:40:08.137785 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:40:08.137819 | orchestrator | changed: [testbed-manager]
2026-03-05 00:40:08.137833 | orchestrator |
2026-03-05 00:40:08.137847 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-05 00:40:08.137861 | orchestrator | Thursday 05 March 2026  00:40:01 +0000 (0:00:01.489)       0:00:12.768 ********
2026-03-05 00:40:08.137875 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:40:08.137884 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:40:08.137891 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:40:08.137899 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:40:08.137907 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:40:08.137915 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:40:08.137923 | orchestrator | ok: [testbed-manager]
2026-03-05 00:40:08.137931 | orchestrator |
2026-03-05 00:40:08.137939 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-05 00:40:08.137947 | orchestrator | Thursday 05 March 2026  00:40:02 +0000 (0:00:01.489)       0:00:14.257 ********
2026-03-05 00:40:08.137955 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:40:08.137962 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:40:08.137970 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:40:08.137978 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:40:08.137986 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:40:08.137994 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:40:08.138001 | orchestrator | changed: [testbed-manager]
2026-03-05 00:40:08.138009 | orchestrator |
2026-03-05 00:40:08.138083 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-05 00:40:08.138091 | orchestrator | Thursday 05 March 2026  00:40:04 +0000 (0:00:02.011)       0:00:16.269 ********
2026-03-05 00:40:08.138183 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:40:08.138203 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:40:08.138219 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:40:08.138231 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:40:08.138242 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:40:08.138253 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:40:08.138265 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:40:08.138277 | orchestrator |
2026-03-05 00:40:08.138289 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-05 00:40:08.138302 | orchestrator |
2026-03-05 00:40:08.138314 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-05 00:40:08.138325 | orchestrator | Thursday 05 March 2026  00:40:05 +0000 (0:00:00.659)       0:00:16.928 ********
2026-03-05 00:40:08.138337 | orchestrator | ok: [testbed-manager]
2026-03-05 00:40:08.138348 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:40:08.138360 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:40:08.138372 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:40:08.138384 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:40:08.138396 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:40:08.138418 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:40:08.138432 | orchestrator |
2026-03-05 00:40:08.138445 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:40:08.138460 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-05 00:40:08.138476 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:40:08.138500 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:40:08.138514 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:40:08.138528 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:40:08.138541 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:40:08.138554 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:40:08.138565 | orchestrator |
2026-03-05 00:40:08.138573 | orchestrator |
2026-03-05 00:40:08.138581 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:40:08.138589 | orchestrator | Thursday 05 March 2026  00:40:08 +0000 (0:00:02.723)       0:00:19.652 ********
2026-03-05 00:40:08.138597 | orchestrator | ===============================================================================
2026-03-05 00:40:08.138604 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.55s
2026-03-05 00:40:08.138612 | orchestrator | Install python3-docker -------------------------------------------------- 2.72s
2026-03-05 00:40:08.138620 | orchestrator | Apply netplan configuration --------------------------------------------- 2.06s
2026-03-05 00:40:08.138628 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.01s
2026-03-05 00:40:08.138636 | orchestrator | Apply netplan configuration --------------------------------------------- 1.79s
2026-03-05 00:40:08.138644 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s
2026-03-05 00:40:08.138652 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.41s
2026-03-05 00:40:08.138660 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.40s
2026-03-05 00:40:08.138667 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.25s
2026-03-05 00:40:08.138675 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s
2026-03-05 00:40:08.138683 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.65s
2026-03-05 00:40:08.138702 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.57s
2026-03-05 00:40:08.735759 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-05 00:40:20.747576 | orchestrator | 2026-03-05 00:40:20 | INFO  | Task 21c1315b-0533-4f20-9881-ce60e5f62857 (reboot) was prepared for execution.
2026-03-05 00:40:20.747702 | orchestrator | 2026-03-05 00:40:20 | INFO  | It takes a moment until task 21c1315b-0533-4f20-9881-ce60e5f62857 (reboot) has been started and output is visible here.
2026-03-05 00:40:30.289569 | orchestrator |
2026-03-05 00:40:30.289724 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-05 00:40:30.289747 | orchestrator |
2026-03-05 00:40:30.289762 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-05 00:40:30.289776 | orchestrator | Thursday 05 March 2026  00:40:24 +0000 (0:00:00.147)       0:00:00.147 ********
2026-03-05 00:40:30.289791 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:40:30.289806 | orchestrator |
2026-03-05 00:40:30.289820 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-05 00:40:30.289829 | orchestrator | Thursday 05 March 2026  00:40:24 +0000 (0:00:00.084)       0:00:00.232 ********
2026-03-05 00:40:30.289837 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:40:30.289845 | orchestrator |
2026-03-05 00:40:30.289853 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-05 00:40:30.289883 | orchestrator | Thursday 05 March 2026  00:40:25 +0000 (0:00:00.886)       0:00:01.118 ********
2026-03-05 00:40:30.289892 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:40:30.289900 | orchestrator |
2026-03-05 00:40:30.289908 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-05 00:40:30.289916 | orchestrator |
2026-03-05 00:40:30.289930 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-05 00:40:30.289943 | orchestrator | Thursday 05 March 2026  00:40:25 +0000 (0:00:00.111)       0:00:01.230 ********
2026-03-05 00:40:30.289955 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:40:30.289968 | orchestrator |
2026-03-05 00:40:30.289980 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-05 00:40:30.289993 | orchestrator | Thursday 05 March 2026  00:40:25 +0000 (0:00:00.078)       0:00:01.308 ********
2026-03-05 00:40:30.290006 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:40:30.290126 | orchestrator |
2026-03-05 00:40:30.290145 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-05 00:40:30.290170 | orchestrator | Thursday 05 March 2026  00:40:26 +0000 (0:00:00.646)       0:00:01.955 ********
2026-03-05 00:40:30.290180 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:40:30.290189 | orchestrator |
2026-03-05 00:40:30.290199 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-05 00:40:30.290208 | orchestrator |
2026-03-05 00:40:30.290216 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-05 00:40:30.290228 | orchestrator | Thursday 05 March 2026  00:40:26 +0000 (0:00:00.110)       0:00:02.065 ********
2026-03-05 00:40:30.290242 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:40:30.290256 | orchestrator |
2026-03-05 00:40:30.290270 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-05 00:40:30.290283 | orchestrator | Thursday 05 March 2026  00:40:26 +0000 (0:00:00.147)       0:00:02.213 ********
2026-03-05 00:40:30.290298 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:40:30.290312 | orchestrator |
2026-03-05 00:40:30.290326 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-05 00:40:30.290342 | orchestrator | Thursday 05 March 2026  00:40:27 +0000 (0:00:00.673)       0:00:02.886 ********
2026-03-05 00:40:30.290362 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:40:30.290377 | orchestrator |
2026-03-05 00:40:30.290391 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-05 00:40:30.290425 | orchestrator |
2026-03-05 00:40:30.290439 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-05 00:40:30.290449 | orchestrator | Thursday 05 March 2026  00:40:27 +0000 (0:00:00.097)       0:00:02.984 ********
2026-03-05 00:40:30.290457 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:40:30.290466 | orchestrator |
2026-03-05 00:40:30.290480 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-05 00:40:30.290493 | orchestrator | Thursday 05 March 2026  00:40:27 +0000 (0:00:00.087)       0:00:03.071 ********
2026-03-05 00:40:30.290507 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:40:30.290520 | orchestrator |
2026-03-05 00:40:30.290533 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-05 00:40:30.290545 | orchestrator | Thursday 05 March 2026  00:40:28 +0000 (0:00:00.672)       0:00:03.743 ********
2026-03-05 00:40:30.290553 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:40:30.290561 | orchestrator |
2026-03-05 00:40:30.290569 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-05 00:40:30.290576 | orchestrator |
2026-03-05 00:40:30.290587 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-05 00:40:30.290600 | orchestrator | Thursday 05 March 2026  00:40:28 +0000 (0:00:00.113)       0:00:03.857 ********
2026-03-05 00:40:30.290614 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:40:30.290629 | orchestrator |
2026-03-05 00:40:30.290643 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-05 00:40:30.290656 | orchestrator | Thursday 05 March 2026  00:40:28 +0000 (0:00:00.121)       0:00:03.978 ********
2026-03-05 00:40:30.290709 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:40:30.290735 | orchestrator |
2026-03-05 00:40:30.290750 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-05 00:40:30.290765 | orchestrator | Thursday 05 March 2026  00:40:29 +0000 (0:00:00.685)       0:00:04.664 ********
2026-03-05 00:40:30.290778 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:40:30.290807 | orchestrator |
2026-03-05 00:40:30.290822 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-05 00:40:30.290833 | orchestrator |
2026-03-05 00:40:30.290841 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-05 00:40:30.290857 | orchestrator | Thursday 05 March 2026  00:40:29 +0000 (0:00:00.118)       0:00:04.782 ********
2026-03-05 00:40:30.290865 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:40:30.290873 | orchestrator |
2026-03-05 00:40:30.290881 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-05 00:40:30.290888 | orchestrator | Thursday 05 March 2026  00:40:29 +0000 (0:00:00.111)       0:00:04.893 ********
2026-03-05 00:40:30.290896 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:40:30.290904 | orchestrator |
2026-03-05 00:40:30.290912 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-05 00:40:30.290919 | orchestrator | Thursday 05 March 2026  00:40:29 +0000 (0:00:00.686)       0:00:05.579 ********
2026-03-05 00:40:30.290944 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:40:30.290953 | orchestrator |
2026-03-05 00:40:30.290968 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:40:30.290985 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-05 00:40:30.291001 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:40:30.291016 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:40:30.291062 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:40:30.291077 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:40:30.291090 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:40:30.291104 | orchestrator | 2026-03-05 00:40:30.291117 | orchestrator | 2026-03-05 00:40:30.291130 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:40:30.291143 | orchestrator | Thursday 05 March 2026 00:40:29 +0000 (0:00:00.034) 0:00:05.614 ******** 2026-03-05 00:40:30.291162 | orchestrator | =============================================================================== 2026-03-05 00:40:30.291170 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.25s 2026-03-05 00:40:30.291178 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.63s 2026-03-05 00:40:30.291185 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s 2026-03-05 00:40:30.612175 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-05 00:40:42.681718 | orchestrator | 2026-03-05 00:40:42 | INFO  | Task b34988ff-00b4-4f28-9578-ffde93532b8c (wait-for-connection) was prepared for execution. 2026-03-05 00:40:42.681810 | orchestrator | 2026-03-05 00:40:42 | INFO  | It takes a moment until task b34988ff-00b4-4f28-9578-ffde93532b8c (wait-for-connection) has been started and output is visible here. 
2026-03-05 00:40:58.596968 | orchestrator | 2026-03-05 00:40:58.597167 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-05 00:40:58.597186 | orchestrator | 2026-03-05 00:40:58.597198 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-05 00:40:58.597210 | orchestrator | Thursday 05 March 2026 00:40:46 +0000 (0:00:00.218) 0:00:00.218 ******** 2026-03-05 00:40:58.597220 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:40:58.597232 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:40:58.597243 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:40:58.597254 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:40:58.597264 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:40:58.597275 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:40:58.597286 | orchestrator | 2026-03-05 00:40:58.597297 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:40:58.597308 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:40:58.597321 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:40:58.597332 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:40:58.597342 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:40:58.597353 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:40:58.597364 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:40:58.597374 | orchestrator | 2026-03-05 00:40:58.597386 | orchestrator | 2026-03-05 00:40:58.597397 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-05 00:40:58.597408 | orchestrator | Thursday 05 March 2026 00:40:58 +0000 (0:00:11.580) 0:00:11.798 ******** 2026-03-05 00:40:58.597418 | orchestrator | =============================================================================== 2026-03-05 00:40:58.597429 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.58s 2026-03-05 00:40:58.791725 | orchestrator | + osism apply hddtemp 2026-03-05 00:41:10.613185 | orchestrator | 2026-03-05 00:41:10 | INFO  | Task 3386b79e-74bf-4bd0-ae69-94ec8ffd35d7 (hddtemp) was prepared for execution. 2026-03-05 00:41:10.613299 | orchestrator | 2026-03-05 00:41:10 | INFO  | It takes a moment until task 3386b79e-74bf-4bd0-ae69-94ec8ffd35d7 (hddtemp) has been started and output is visible here. 2026-03-05 00:41:38.454256 | orchestrator | 2026-03-05 00:41:38.454386 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-05 00:41:38.454414 | orchestrator | 2026-03-05 00:41:38.454434 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-05 00:41:38.454455 | orchestrator | Thursday 05 March 2026 00:41:14 +0000 (0:00:00.245) 0:00:00.245 ******** 2026-03-05 00:41:38.454475 | orchestrator | ok: [testbed-manager] 2026-03-05 00:41:38.454490 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:41:38.454501 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:41:38.454512 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:41:38.454523 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:41:38.454534 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:41:38.454545 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:41:38.454555 | orchestrator | 2026-03-05 00:41:38.454567 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-03-05 00:41:38.454577 | orchestrator | Thursday 05 March 2026 
00:41:15 +0000 (0:00:00.690) 0:00:00.936 ******** 2026-03-05 00:41:38.454591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:41:38.454653 | orchestrator | 2026-03-05 00:41:38.454678 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-05 00:41:38.454696 | orchestrator | Thursday 05 March 2026 00:41:16 +0000 (0:00:01.138) 0:00:02.075 ******** 2026-03-05 00:41:38.454713 | orchestrator | ok: [testbed-manager] 2026-03-05 00:41:38.454731 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:41:38.454749 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:41:38.454767 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:41:38.454786 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:41:38.454805 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:41:38.454822 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:41:38.454841 | orchestrator | 2026-03-05 00:41:38.454860 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-05 00:41:38.454887 | orchestrator | Thursday 05 March 2026 00:41:18 +0000 (0:00:02.066) 0:00:04.142 ******** 2026-03-05 00:41:38.454898 | orchestrator | changed: [testbed-manager] 2026-03-05 00:41:38.454911 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:41:38.454921 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:41:38.454932 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:41:38.454943 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:41:38.454953 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:41:38.454964 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:41:38.454974 | orchestrator | 2026-03-05 00:41:38.454985 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-03-05 00:41:38.454996 | orchestrator | Thursday 05 March 2026 00:41:19 +0000 (0:00:01.132) 0:00:05.274 ******** 2026-03-05 00:41:38.455007 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:41:38.455017 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:41:38.455028 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:41:38.455039 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:41:38.455049 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:41:38.455060 | orchestrator | ok: [testbed-manager] 2026-03-05 00:41:38.455070 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:41:38.455105 | orchestrator | 2026-03-05 00:41:38.455117 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-05 00:41:38.455128 | orchestrator | Thursday 05 March 2026 00:41:21 +0000 (0:00:01.248) 0:00:06.523 ******** 2026-03-05 00:41:38.455139 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:41:38.455149 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:41:38.455160 | orchestrator | changed: [testbed-manager] 2026-03-05 00:41:38.455171 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:41:38.455181 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:41:38.455192 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:41:38.455202 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:41:38.455213 | orchestrator | 2026-03-05 00:41:38.455224 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-05 00:41:38.455234 | orchestrator | Thursday 05 March 2026 00:41:21 +0000 (0:00:00.790) 0:00:07.314 ******** 2026-03-05 00:41:38.455245 | orchestrator | changed: [testbed-manager] 2026-03-05 00:41:38.455256 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:41:38.455267 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:41:38.455277 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:41:38.455288 | orchestrator | changed: 
[testbed-node-5] 2026-03-05 00:41:38.455299 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:41:38.455309 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:41:38.455320 | orchestrator | 2026-03-05 00:41:38.455331 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-05 00:41:38.455341 | orchestrator | Thursday 05 March 2026 00:41:35 +0000 (0:00:13.606) 0:00:20.920 ******** 2026-03-05 00:41:38.455353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:41:38.455364 | orchestrator | 2026-03-05 00:41:38.455386 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-05 00:41:38.455398 | orchestrator | Thursday 05 March 2026 00:41:36 +0000 (0:00:01.052) 0:00:21.972 ******** 2026-03-05 00:41:38.455409 | orchestrator | changed: [testbed-manager] 2026-03-05 00:41:38.455420 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:41:38.455431 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:41:38.455442 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:41:38.455459 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:41:38.455478 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:41:38.455494 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:41:38.455510 | orchestrator | 2026-03-05 00:41:38.455528 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:41:38.455544 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:41:38.455590 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:41:38.455608 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:41:38.455627 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:41:38.455643 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:41:38.455661 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:41:38.455678 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:41:38.455696 | orchestrator | 2026-03-05 00:41:38.455714 | orchestrator | 2026-03-05 00:41:38.455733 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:41:38.455750 | orchestrator | Thursday 05 March 2026 00:41:38 +0000 (0:00:01.751) 0:00:23.724 ******** 2026-03-05 00:41:38.455768 | orchestrator | =============================================================================== 2026-03-05 00:41:38.455787 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.61s 2026-03-05 00:41:38.455807 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.07s 2026-03-05 00:41:38.455826 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.75s 2026-03-05 00:41:38.455852 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.25s 2026-03-05 00:41:38.455863 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.14s 2026-03-05 00:41:38.455874 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.13s 2026-03-05 00:41:38.455885 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.05s 2026-03-05 00:41:38.455895 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.79s 2026-03-05 00:41:38.455906 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.69s 2026-03-05 00:41:38.658329 | orchestrator | ++ semver 9.5.0 7.1.1 2026-03-05 00:41:38.693008 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-05 00:41:38.693261 | orchestrator | + sudo systemctl restart manager.service 2026-03-05 00:41:56.296733 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-05 00:41:56.296838 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-05 00:41:56.296855 | orchestrator | + local max_attempts=60 2026-03-05 00:41:56.296868 | orchestrator | + local name=ceph-ansible 2026-03-05 00:41:56.296880 | orchestrator | + local attempt_num=1 2026-03-05 00:41:56.296892 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:41:56.327406 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:41:56.327510 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:41:56.327526 | orchestrator | + sleep 5 2026-03-05 00:42:01.332636 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:42:01.366151 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:42:01.366237 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:42:01.366756 | orchestrator | + sleep 5 2026-03-05 00:42:06.369142 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:42:06.407512 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:42:06.407653 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:42:06.407669 | orchestrator | + sleep 5 2026-03-05 00:42:11.412673 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:42:11.446814 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:42:11.446931 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-03-05 00:42:11.446954 | orchestrator | + sleep 5 2026-03-05 00:42:16.450796 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:42:16.487934 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:42:16.487997 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:42:16.488003 | orchestrator | + sleep 5 2026-03-05 00:42:21.492769 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:42:21.523410 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:42:21.523544 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:42:21.523571 | orchestrator | + sleep 5 2026-03-05 00:42:26.527826 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:42:26.567130 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:42:26.567218 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:42:26.567239 | orchestrator | + sleep 5 2026-03-05 00:42:31.571257 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:42:31.602839 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-05 00:42:31.602961 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:42:31.602979 | orchestrator | + sleep 5 2026-03-05 00:42:36.605764 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:42:36.681196 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-05 00:42:36.681297 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:42:36.681311 | orchestrator | + sleep 5 2026-03-05 00:42:41.683574 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:42:41.717156 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-05 00:42:41.717238 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-03-05 00:42:41.717253 | orchestrator | + sleep 5 2026-03-05 00:42:46.721375 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:42:46.759462 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-05 00:42:46.759561 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:42:46.759584 | orchestrator | + sleep 5 2026-03-05 00:42:51.762825 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:42:51.796902 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-05 00:42:51.796989 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:42:51.797001 | orchestrator | + sleep 5 2026-03-05 00:42:56.800323 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:42:56.833668 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-05 00:42:56.833752 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-05 00:42:56.833763 | orchestrator | + sleep 5 2026-03-05 00:43:01.837593 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-05 00:43:01.872358 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:43:01.872443 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-05 00:43:01.872454 | orchestrator | + local max_attempts=60 2026-03-05 00:43:01.872462 | orchestrator | + local name=kolla-ansible 2026-03-05 00:43:01.872469 | orchestrator | + local attempt_num=1 2026-03-05 00:43:01.873030 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-05 00:43:01.897582 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:43:01.897663 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-05 00:43:01.897673 | orchestrator | + local max_attempts=60 2026-03-05 00:43:01.897751 | orchestrator | + local name=osism-ansible 2026-03-05 00:43:01.897759 | 
orchestrator | + local attempt_num=1 2026-03-05 00:43:01.898203 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-05 00:43:01.932153 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-05 00:43:01.932235 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-05 00:43:01.932244 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-05 00:43:02.110519 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-05 00:43:02.231379 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-05 00:43:02.376825 | orchestrator | ARA in osism-ansible already disabled. 2026-03-05 00:43:02.523625 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-05 00:43:02.526352 | orchestrator | + osism apply gather-facts 2026-03-05 00:43:14.578486 | orchestrator | 2026-03-05 00:43:14 | INFO  | Task 59941a08-1be1-491f-b2c1-47aaa45672aa (gather-facts) was prepared for execution. 2026-03-05 00:43:14.578594 | orchestrator | 2026-03-05 00:43:14 | INFO  | It takes a moment until task 59941a08-1be1-491f-b2c1-47aaa45672aa (gather-facts) has been started and output is visible here. 
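The `set -x` trace above polls each container's Docker health status every five seconds until it reports `healthy`. A minimal reconstruction of that `wait_for_container_healthy` helper (hypothetical; the real script ships with the testbed configuration, and the trace calls `/usr/bin/docker` directly) looks like:

```shell
# Sketch of the health-wait loop seen in the trace above. `docker` is
# invoked without an absolute path here so it can be stubbed in tests;
# the traced script uses /usr/bin/docker.
wait_for_container_healthy() {
    max_attempts=$1
    name=$2
    attempt_num=1
    while :; do
        # Ask Docker for the container's health state: starting/unhealthy/healthy.
        status=$(docker inspect -f '{{.State.Health.Status}}' "$name")
        [ "$status" = healthy ] && return 0
        # Give up once the attempt counter reaches the limit, as in the trace.
        if [ "$attempt_num" -eq "$max_attempts" ]; then
            echo "container $name still $status after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

With `max_attempts=60` and a 5-second sleep, this bounds the wait at roughly five minutes per container, which matches the ceph-ansible container above going `unhealthy` → `starting` → `healthy` over about 65 seconds.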
2026-03-05 00:43:28.055056 | orchestrator | 2026-03-05 00:43:28.055199 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-05 00:43:28.055219 | orchestrator | 2026-03-05 00:43:28.055230 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-05 00:43:28.055243 | orchestrator | Thursday 05 March 2026 00:43:18 +0000 (0:00:00.229) 0:00:00.229 ******** 2026-03-05 00:43:28.055255 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:43:28.055268 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:43:28.055279 | orchestrator | ok: [testbed-manager] 2026-03-05 00:43:28.055290 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:43:28.055301 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:43:28.055312 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:43:28.055323 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:43:28.055334 | orchestrator | 2026-03-05 00:43:28.055345 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-05 00:43:28.055356 | orchestrator | 2026-03-05 00:43:28.055367 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-05 00:43:28.055378 | orchestrator | Thursday 05 March 2026 00:43:27 +0000 (0:00:08.653) 0:00:08.883 ******** 2026-03-05 00:43:28.055389 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:43:28.055401 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:43:28.055412 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:43:28.055423 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:43:28.055434 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:43:28.055445 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:43:28.055456 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:43:28.055466 | orchestrator | 2026-03-05 00:43:28.055477 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-05 00:43:28.055494 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:43:28.055518 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:43:28.055548 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:43:28.055565 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:43:28.055581 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:43:28.055600 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:43:28.055619 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 00:43:28.055672 | orchestrator | 2026-03-05 00:43:28.055694 | orchestrator | 2026-03-05 00:43:28.055713 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:43:28.055728 | orchestrator | Thursday 05 March 2026 00:43:27 +0000 (0:00:00.491) 0:00:09.375 ******** 2026-03-05 00:43:28.055741 | orchestrator | =============================================================================== 2026-03-05 00:43:28.055754 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.65s 2026-03-05 00:43:28.055767 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2026-03-05 00:43:28.252911 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-05 00:43:28.268148 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-05 
00:43:28.283407 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-05 00:43:28.294469 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-05 00:43:28.305457 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-05 00:43:28.316232 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-05 00:43:28.325822 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-05 00:43:28.333566 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-05 00:43:28.342184 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-05 00:43:28.350250 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-05 00:43:28.360531 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-05 00:43:28.370633 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-05 00:43:28.387208 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-05 00:43:28.395825 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-05 00:43:28.410476 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-05 00:43:28.420266 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-05 00:43:28.435411 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-05 00:43:28.445193 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-05 00:43:28.461250 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-05 00:43:28.474578 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-05 00:43:28.486299 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-05 00:43:28.502667 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-05 00:43:28.516370 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-05 00:43:28.537320 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-05 00:43:28.743040 | orchestrator | ok: Runtime: 0:24:11.685327 2026-03-05 00:43:28.844635 | 2026-03-05 00:43:28.845292 | TASK [Deploy services] 2026-03-05 00:43:29.381649 | orchestrator | skipping: Conditional result was False 2026-03-05 00:43:29.400204 | 2026-03-05 00:43:29.400379 | TASK [Deploy in a nutshell] 2026-03-05 00:43:30.214303 | orchestrator | + set -e 2026-03-05 00:43:30.214558 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-05 00:43:30.214594 | orchestrator | ++ export INTERACTIVE=false 2026-03-05 00:43:30.214621 | orchestrator | ++ INTERACTIVE=false 2026-03-05 00:43:30.214647 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-05 00:43:30.214667 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-05 00:43:30.214686 | 
orchestrator | + source /opt/manager-vars.sh 2026-03-05 00:43:30.214762 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-05 00:43:30.214808 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-05 00:43:30.214837 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-05 00:43:30.214876 | orchestrator | ++ CEPH_VERSION=reef 2026-03-05 00:43:30.214895 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-05 00:43:30.214920 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-05 00:43:30.214939 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-05 00:43:30.214966 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-05 00:43:30.214977 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-05 00:43:30.214990 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-05 00:43:30.215001 | orchestrator | ++ export ARA=false 2026-03-05 00:43:30.215011 | orchestrator | ++ ARA=false 2026-03-05 00:43:30.215021 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-05 00:43:30.215032 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-05 00:43:30.215041 | orchestrator | ++ export TEMPEST=true 2026-03-05 00:43:30.215051 | orchestrator | ++ TEMPEST=true 2026-03-05 00:43:30.215060 | orchestrator | ++ export IS_ZUUL=true 2026-03-05 00:43:30.215104 | orchestrator | ++ IS_ZUUL=true 2026-03-05 00:43:30.215126 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.20 2026-03-05 00:43:30.215150 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.20 2026-03-05 00:43:30.215166 | orchestrator | ++ export EXTERNAL_API=false 2026-03-05 00:43:30.215181 | orchestrator | ++ EXTERNAL_API=false 2026-03-05 00:43:30.215196 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-05 00:43:30.215212 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-05 00:43:30.215229 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-05 00:43:30.215245 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-05 00:43:30.215264 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-05 00:43:30.215281 | 
orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-05 00:43:30.215298 | orchestrator | + echo 2026-03-05 00:43:30.215309 | orchestrator | 2026-03-05 00:43:30.215319 | orchestrator | # PULL IMAGES 2026-03-05 00:43:30.215329 | orchestrator | 2026-03-05 00:43:30.215339 | orchestrator | + echo '# PULL IMAGES' 2026-03-05 00:43:30.215349 | orchestrator | + echo 2026-03-05 00:43:30.216311 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-05 00:43:30.246950 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-05 00:43:30.247065 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-05 00:43:32.023025 | orchestrator | 2026-03-05 00:43:32 | INFO  | Trying to run play pull-images in environment custom 2026-03-05 00:43:42.155569 | orchestrator | 2026-03-05 00:43:42 | INFO  | Task b0ae6d6f-bd8d-4966-bdcf-f72440d62a3e (pull-images) was prepared for execution. 2026-03-05 00:43:42.155693 | orchestrator | 2026-03-05 00:43:42 | INFO  | Task b0ae6d6f-bd8d-4966-bdcf-f72440d62a3e is running in background. No more output. Check ARA for logs. 2026-03-05 00:43:44.458001 | orchestrator | 2026-03-05 00:43:44 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-05 00:43:54.617104 | orchestrator | 2026-03-05 00:43:54 | INFO  | Task 193a7eec-e5dc-443e-b16d-1f8da49fed30 (wipe-partitions) was prepared for execution. 2026-03-05 00:43:54.617285 | orchestrator | 2026-03-05 00:43:54 | INFO  | It takes a moment until task 193a7eec-e5dc-443e-b16d-1f8da49fed30 (wipe-partitions) has been started and output is visible here. 
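The trace above sources `include.sh` and `manager-vars.sh`, then gates the image pull on a version comparison (`semver 9.5.0 7.0.0` printing `1`, checked with `[[ 1 -ge 0 ]]`) before calling `osism apply --no-wait -r 2 -e custom pull-images`. A minimal sketch of that gating pattern, approximating the `semver` helper with `sort -V` (an assumption; the real helper is not shown in this log, and `osism apply` is only echoed here):

```shell
#!/usr/bin/env bash
set -e

# compare_semver A B -> prints 1 if A >= B, else -1.
# Assumption: approximates the job's `semver` helper using GNU sort -V.
compare_semver() {
    if [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]; then
        echo 1
    else
        echo -1
    fi
}

MANAGER_VERSION=9.5.0
if [[ "$(compare_semver "$MANAGER_VERSION" 7.0.0)" -ge 0 ]]; then
    # The real job runs: osism apply --no-wait -r 2 -e custom pull-images
    echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

The `-ge 0` check mirrors the trace: any non-negative comparison result (version equal to or newer than the threshold) triggers the play.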
2026-03-05 00:44:07.564667 | orchestrator | 2026-03-05 00:44:07.564794 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-05 00:44:07.564812 | orchestrator | 2026-03-05 00:44:07.564824 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-05 00:44:07.564843 | orchestrator | Thursday 05 March 2026 00:43:58 +0000 (0:00:00.126) 0:00:00.126 ******** 2026-03-05 00:44:07.564858 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:44:07.564870 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:44:07.564881 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:44:07.564892 | orchestrator | 2026-03-05 00:44:07.564904 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-05 00:44:07.564991 | orchestrator | Thursday 05 March 2026 00:43:59 +0000 (0:00:00.592) 0:00:00.718 ******** 2026-03-05 00:44:07.565004 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:07.565016 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:44:07.565028 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:44:07.565129 | orchestrator | 2026-03-05 00:44:07.565144 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-05 00:44:07.565155 | orchestrator | Thursday 05 March 2026 00:43:59 +0000 (0:00:00.363) 0:00:01.081 ******** 2026-03-05 00:44:07.565169 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:44:07.565183 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:44:07.565196 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:44:07.565209 | orchestrator | 2026-03-05 00:44:07.565223 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-05 00:44:07.565236 | orchestrator | Thursday 05 March 2026 00:44:00 +0000 (0:00:00.594) 0:00:01.676 ******** 2026-03-05 00:44:07.565250 | orchestrator | skipping: 
[testbed-node-3] 2026-03-05 00:44:07.565262 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:44:07.565275 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:44:07.565288 | orchestrator | 2026-03-05 00:44:07.565301 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-05 00:44:07.565314 | orchestrator | Thursday 05 March 2026 00:44:00 +0000 (0:00:00.240) 0:00:01.916 ******** 2026-03-05 00:44:07.565328 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-05 00:44:07.565346 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-05 00:44:07.565359 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-05 00:44:07.565372 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-05 00:44:07.565386 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-05 00:44:07.565400 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-05 00:44:07.565412 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-05 00:44:07.565425 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-05 00:44:07.565440 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-05 00:44:07.565452 | orchestrator | 2026-03-05 00:44:07.565466 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-05 00:44:07.565479 | orchestrator | Thursday 05 March 2026 00:44:01 +0000 (0:00:01.277) 0:00:03.194 ******** 2026-03-05 00:44:07.565492 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-05 00:44:07.565507 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-05 00:44:07.565519 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-05 00:44:07.565530 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-05 00:44:07.565541 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-05 00:44:07.565552 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-05 00:44:07.565563 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-05 00:44:07.565574 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-05 00:44:07.565585 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-05 00:44:07.565596 | orchestrator | 2026-03-05 00:44:07.565607 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-05 00:44:07.565618 | orchestrator | Thursday 05 March 2026 00:44:03 +0000 (0:00:01.667) 0:00:04.862 ******** 2026-03-05 00:44:07.565628 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-05 00:44:07.565639 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-05 00:44:07.565650 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-05 00:44:07.565661 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-05 00:44:07.565679 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-05 00:44:07.565690 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-05 00:44:07.565701 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-05 00:44:07.565712 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-05 00:44:07.565750 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-05 00:44:07.565763 | orchestrator | 2026-03-05 00:44:07.565774 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-05 00:44:07.565785 | orchestrator | Thursday 05 March 2026 00:44:05 +0000 (0:00:02.378) 0:00:07.241 ******** 2026-03-05 00:44:07.565796 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:44:07.565807 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:44:07.565818 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:44:07.565829 | orchestrator | 2026-03-05 00:44:07.565839 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-05 00:44:07.565851 | orchestrator | Thursday 05 March 2026 00:44:06 +0000 (0:00:00.630) 0:00:07.871 ******** 2026-03-05 00:44:07.565862 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:44:07.565872 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:44:07.565883 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:44:07.565894 | orchestrator | 2026-03-05 00:44:07.565905 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:44:07.565917 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:07.565930 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:07.565961 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:07.565973 | orchestrator | 2026-03-05 00:44:07.565984 | orchestrator | 2026-03-05 00:44:07.565995 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:44:07.566006 | orchestrator | Thursday 05 March 2026 00:44:07 +0000 (0:00:00.677) 0:00:08.549 ******** 2026-03-05 00:44:07.566108 | orchestrator | =============================================================================== 2026-03-05 00:44:07.566130 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.38s 2026-03-05 00:44:07.566142 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.67s 2026-03-05 00:44:07.566153 | orchestrator | Check device availability ----------------------------------------------- 1.28s 2026-03-05 00:44:07.566164 | orchestrator | Request device events from the kernel ----------------------------------- 0.68s 2026-03-05 00:44:07.566175 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.63s 2026-03-05 00:44:07.566186 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.59s 2026-03-05 00:44:07.566197 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2026-03-05 00:44:07.566208 | orchestrator | Remove all rook related logical devices --------------------------------- 0.36s 2026-03-05 00:44:07.566219 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2026-03-05 00:44:19.974459 | orchestrator | 2026-03-05 00:44:19 | INFO  | Task b07da921-0c83-4ebd-b2df-cba184bb9bbe (facts) was prepared for execution. 2026-03-05 00:44:19.974572 | orchestrator | 2026-03-05 00:44:19 | INFO  | It takes a moment until task b07da921-0c83-4ebd-b2df-cba184bb9bbe (facts) has been started and output is visible here. 2026-03-05 00:44:33.007386 | orchestrator | 2026-03-05 00:44:33.007491 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-05 00:44:33.007505 | orchestrator | 2026-03-05 00:44:33.007517 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-05 00:44:33.007527 | orchestrator | Thursday 05 March 2026 00:44:24 +0000 (0:00:00.251) 0:00:00.251 ******** 2026-03-05 00:44:33.007538 | orchestrator | ok: [testbed-manager] 2026-03-05 00:44:33.007549 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:44:33.007559 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:44:33.007568 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:44:33.007636 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:44:33.007648 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:44:33.007658 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:44:33.007667 | orchestrator | 2026-03-05 00:44:33.007680 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-05 
00:44:33.007690 | orchestrator | Thursday 05 March 2026 00:44:25 +0000 (0:00:01.122) 0:00:01.373 ******** 2026-03-05 00:44:33.007699 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:44:33.007710 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:44:33.007719 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:44:33.007729 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:44:33.007739 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:33.007748 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:44:33.007758 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:44:33.007767 | orchestrator | 2026-03-05 00:44:33.007777 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-05 00:44:33.007786 | orchestrator | 2026-03-05 00:44:33.007796 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-05 00:44:33.007806 | orchestrator | Thursday 05 March 2026 00:44:26 +0000 (0:00:01.189) 0:00:02.563 ******** 2026-03-05 00:44:33.007815 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:44:33.007825 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:44:33.007834 | orchestrator | ok: [testbed-manager] 2026-03-05 00:44:33.007845 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:44:33.007855 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:44:33.007865 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:44:33.007874 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:44:33.007884 | orchestrator | 2026-03-05 00:44:33.007896 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-05 00:44:33.007907 | orchestrator | 2026-03-05 00:44:33.007919 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-05 00:44:33.007947 | orchestrator | Thursday 05 March 2026 00:44:32 +0000 (0:00:05.681) 0:00:08.245 ******** 2026-03-05 00:44:33.007960 | 
orchestrator | skipping: [testbed-manager] 2026-03-05 00:44:33.007971 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:44:33.007982 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:44:33.007994 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:44:33.008006 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:33.008017 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:44:33.008052 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:44:33.008064 | orchestrator | 2026-03-05 00:44:33.008075 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:44:33.008087 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:33.008100 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:33.008112 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:33.008124 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:33.008135 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:33.008147 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:33.008159 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:44:33.008170 | orchestrator | 2026-03-05 00:44:33.008181 | orchestrator | 2026-03-05 00:44:33.008194 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:44:33.008228 | orchestrator | Thursday 05 March 2026 00:44:32 +0000 (0:00:00.511) 0:00:08.756 ******** 2026-03-05 00:44:33.008247 | orchestrator | 
=============================================================================== 2026-03-05 00:44:33.008264 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.68s 2026-03-05 00:44:33.008280 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.19s 2026-03-05 00:44:33.008296 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s 2026-03-05 00:44:33.008312 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-03-05 00:44:35.327702 | orchestrator | 2026-03-05 00:44:35 | INFO  | Task 7517aa3e-b6bd-4414-a9ca-ee56a7f4a3c9 (ceph-configure-lvm-volumes) was prepared for execution. 2026-03-05 00:44:35.327784 | orchestrator | 2026-03-05 00:44:35 | INFO  | It takes a moment until task 7517aa3e-b6bd-4414-a9ca-ee56a7f4a3c9 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-05 00:44:46.682976 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-05 00:44:46.683136 | orchestrator | 2.16.14 2026-03-05 00:44:46.683166 | orchestrator | 2026-03-05 00:44:46.683180 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-05 00:44:46.683192 | orchestrator | 2026-03-05 00:44:46.683207 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-05 00:44:46.683219 | orchestrator | Thursday 05 March 2026 00:44:39 +0000 (0:00:00.314) 0:00:00.314 ******** 2026-03-05 00:44:46.683231 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-05 00:44:46.683243 | orchestrator | 2026-03-05 00:44:46.683255 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-05 00:44:46.683266 | orchestrator | Thursday 05 March 2026 00:44:39 +0000 (0:00:00.238) 0:00:00.552 ******** 2026-03-05 00:44:46.683277 | 
orchestrator | ok: [testbed-node-3] 2026-03-05 00:44:46.683289 | orchestrator | 2026-03-05 00:44:46.683300 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.683311 | orchestrator | Thursday 05 March 2026 00:44:40 +0000 (0:00:00.212) 0:00:00.765 ******** 2026-03-05 00:44:46.683322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-05 00:44:46.683334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-05 00:44:46.683345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-05 00:44:46.683356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-05 00:44:46.683367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-05 00:44:46.683378 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-05 00:44:46.683389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-05 00:44:46.683400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-05 00:44:46.683411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-05 00:44:46.683422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-05 00:44:46.683443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-05 00:44:46.683455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-05 00:44:46.683466 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-05 00:44:46.683478 | orchestrator | 
2026-03-05 00:44:46.683490 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.683503 | orchestrator | Thursday 05 March 2026 00:44:40 +0000 (0:00:00.472) 0:00:01.237 ******** 2026-03-05 00:44:46.683579 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.683593 | orchestrator | 2026-03-05 00:44:46.683607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.683619 | orchestrator | Thursday 05 March 2026 00:44:40 +0000 (0:00:00.207) 0:00:01.445 ******** 2026-03-05 00:44:46.683633 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.683646 | orchestrator | 2026-03-05 00:44:46.683659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.683671 | orchestrator | Thursday 05 March 2026 00:44:40 +0000 (0:00:00.189) 0:00:01.635 ******** 2026-03-05 00:44:46.683684 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.683697 | orchestrator | 2026-03-05 00:44:46.683710 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.683723 | orchestrator | Thursday 05 March 2026 00:44:41 +0000 (0:00:00.193) 0:00:01.828 ******** 2026-03-05 00:44:46.683740 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.683753 | orchestrator | 2026-03-05 00:44:46.683766 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.683778 | orchestrator | Thursday 05 March 2026 00:44:41 +0000 (0:00:00.197) 0:00:02.026 ******** 2026-03-05 00:44:46.683791 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.683805 | orchestrator | 2026-03-05 00:44:46.683818 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.683830 | orchestrator | Thursday 05 March 2026 00:44:41 +0000 
(0:00:00.210) 0:00:02.236 ******** 2026-03-05 00:44:46.683843 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.683856 | orchestrator | 2026-03-05 00:44:46.683868 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.683879 | orchestrator | Thursday 05 March 2026 00:44:41 +0000 (0:00:00.201) 0:00:02.438 ******** 2026-03-05 00:44:46.683890 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.683901 | orchestrator | 2026-03-05 00:44:46.683913 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.683924 | orchestrator | Thursday 05 March 2026 00:44:41 +0000 (0:00:00.218) 0:00:02.657 ******** 2026-03-05 00:44:46.683935 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.683946 | orchestrator | 2026-03-05 00:44:46.683957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.683969 | orchestrator | Thursday 05 March 2026 00:44:42 +0000 (0:00:00.192) 0:00:02.849 ******** 2026-03-05 00:44:46.683980 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb) 2026-03-05 00:44:46.683992 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb) 2026-03-05 00:44:46.684003 | orchestrator | 2026-03-05 00:44:46.684036 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.684068 | orchestrator | Thursday 05 March 2026 00:44:42 +0000 (0:00:00.405) 0:00:03.254 ******** 2026-03-05 00:44:46.684080 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4) 2026-03-05 00:44:46.684092 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4) 2026-03-05 00:44:46.684103 | orchestrator | 2026-03-05 
00:44:46.684114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.684125 | orchestrator | Thursday 05 March 2026 00:44:43 +0000 (0:00:00.584) 0:00:03.839 ******** 2026-03-05 00:44:46.684136 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1cde8d38-c9d3-4512-8106-c139834ff42b) 2026-03-05 00:44:46.684147 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1cde8d38-c9d3-4512-8106-c139834ff42b) 2026-03-05 00:44:46.684159 | orchestrator | 2026-03-05 00:44:46.684170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.684181 | orchestrator | Thursday 05 March 2026 00:44:43 +0000 (0:00:00.596) 0:00:04.436 ******** 2026-03-05 00:44:46.684202 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e9fbedff-eb29-4e1b-a232-9476e4a5bada) 2026-03-05 00:44:46.684213 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e9fbedff-eb29-4e1b-a232-9476e4a5bada) 2026-03-05 00:44:46.684224 | orchestrator | 2026-03-05 00:44:46.684235 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:44:46.684246 | orchestrator | Thursday 05 March 2026 00:44:44 +0000 (0:00:00.792) 0:00:05.229 ******** 2026-03-05 00:44:46.684258 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-05 00:44:46.684269 | orchestrator | 2026-03-05 00:44:46.684286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:46.684298 | orchestrator | Thursday 05 March 2026 00:44:44 +0000 (0:00:00.339) 0:00:05.568 ******** 2026-03-05 00:44:46.684309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-05 00:44:46.684320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 
2026-03-05 00:44:46.684331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-05 00:44:46.684342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-05 00:44:46.684353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-05 00:44:46.684364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-05 00:44:46.684374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-05 00:44:46.684385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-05 00:44:46.684396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-05 00:44:46.684407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-05 00:44:46.684418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-05 00:44:46.684429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-05 00:44:46.684440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-05 00:44:46.684451 | orchestrator | 2026-03-05 00:44:46.684463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:46.684474 | orchestrator | Thursday 05 March 2026 00:44:45 +0000 (0:00:00.356) 0:00:05.925 ******** 2026-03-05 00:44:46.684485 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.684496 | orchestrator | 2026-03-05 00:44:46.684507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:46.684518 | orchestrator 
| Thursday 05 March 2026 00:44:45 +0000 (0:00:00.202) 0:00:06.127 ******** 2026-03-05 00:44:46.684529 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.684540 | orchestrator | 2026-03-05 00:44:46.684551 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:46.684562 | orchestrator | Thursday 05 March 2026 00:44:45 +0000 (0:00:00.209) 0:00:06.337 ******** 2026-03-05 00:44:46.684573 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.684584 | orchestrator | 2026-03-05 00:44:46.684595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:46.684606 | orchestrator | Thursday 05 March 2026 00:44:45 +0000 (0:00:00.200) 0:00:06.537 ******** 2026-03-05 00:44:46.684617 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.684628 | orchestrator | 2026-03-05 00:44:46.684639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:46.684650 | orchestrator | Thursday 05 March 2026 00:44:46 +0000 (0:00:00.201) 0:00:06.739 ******** 2026-03-05 00:44:46.684661 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.684679 | orchestrator | 2026-03-05 00:44:46.684691 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:46.684702 | orchestrator | Thursday 05 March 2026 00:44:46 +0000 (0:00:00.205) 0:00:06.944 ******** 2026-03-05 00:44:46.684713 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.684724 | orchestrator | 2026-03-05 00:44:46.684735 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:46.684746 | orchestrator | Thursday 05 March 2026 00:44:46 +0000 (0:00:00.208) 0:00:07.153 ******** 2026-03-05 00:44:46.684757 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:46.684768 | orchestrator | 2026-03-05 
00:44:46.684785 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:54.182402 | orchestrator | Thursday 05 March 2026 00:44:46 +0000 (0:00:00.202) 0:00:07.355 ******** 2026-03-05 00:44:54.182496 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.182507 | orchestrator | 2026-03-05 00:44:54.182514 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:54.182520 | orchestrator | Thursday 05 March 2026 00:44:46 +0000 (0:00:00.198) 0:00:07.554 ******** 2026-03-05 00:44:54.182526 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-05 00:44:54.182533 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-05 00:44:54.182539 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-05 00:44:54.182545 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-05 00:44:54.182551 | orchestrator | 2026-03-05 00:44:54.182556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:54.182562 | orchestrator | Thursday 05 March 2026 00:44:47 +0000 (0:00:00.991) 0:00:08.546 ******** 2026-03-05 00:44:54.182568 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.182573 | orchestrator | 2026-03-05 00:44:54.182579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:54.182584 | orchestrator | Thursday 05 March 2026 00:44:48 +0000 (0:00:00.201) 0:00:08.748 ******** 2026-03-05 00:44:54.182590 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.182596 | orchestrator | 2026-03-05 00:44:54.182601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:54.182607 | orchestrator | Thursday 05 March 2026 00:44:48 +0000 (0:00:00.207) 0:00:08.956 ******** 2026-03-05 00:44:54.182612 | orchestrator | skipping: [testbed-node-3] 2026-03-05 
00:44:54.182617 | orchestrator | 2026-03-05 00:44:54.182623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:44:54.182628 | orchestrator | Thursday 05 March 2026 00:44:48 +0000 (0:00:00.211) 0:00:09.167 ******** 2026-03-05 00:44:54.182634 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.182639 | orchestrator | 2026-03-05 00:44:54.182645 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-05 00:44:54.182650 | orchestrator | Thursday 05 March 2026 00:44:48 +0000 (0:00:00.214) 0:00:09.382 ******** 2026-03-05 00:44:54.182656 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-05 00:44:54.182661 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-05 00:44:54.182667 | orchestrator | 2026-03-05 00:44:54.182687 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-05 00:44:54.182693 | orchestrator | Thursday 05 March 2026 00:44:48 +0000 (0:00:00.171) 0:00:09.553 ******** 2026-03-05 00:44:54.182699 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.182704 | orchestrator | 2026-03-05 00:44:54.182710 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-05 00:44:54.182715 | orchestrator | Thursday 05 March 2026 00:44:49 +0000 (0:00:00.129) 0:00:09.683 ******** 2026-03-05 00:44:54.182721 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.182726 | orchestrator | 2026-03-05 00:44:54.182732 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-05 00:44:54.182737 | orchestrator | Thursday 05 March 2026 00:44:49 +0000 (0:00:00.132) 0:00:09.816 ******** 2026-03-05 00:44:54.182758 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.182764 | orchestrator | 2026-03-05 00:44:54.182769 | orchestrator 
| TASK [Define lvm_volumes structures] ******************************************* 2026-03-05 00:44:54.182775 | orchestrator | Thursday 05 March 2026 00:44:49 +0000 (0:00:00.130) 0:00:09.947 ******** 2026-03-05 00:44:54.182780 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:44:54.182786 | orchestrator | 2026-03-05 00:44:54.182792 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-05 00:44:54.182797 | orchestrator | Thursday 05 March 2026 00:44:49 +0000 (0:00:00.136) 0:00:10.083 ******** 2026-03-05 00:44:54.182803 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8e61642d-a609-5f4c-883e-a16b698ed397'}}) 2026-03-05 00:44:54.182809 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1a9c38f8-c56f-5625-8ade-2e45962405d2'}}) 2026-03-05 00:44:54.182814 | orchestrator | 2026-03-05 00:44:54.182820 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-05 00:44:54.182825 | orchestrator | Thursday 05 March 2026 00:44:49 +0000 (0:00:00.183) 0:00:10.266 ******** 2026-03-05 00:44:54.182845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8e61642d-a609-5f4c-883e-a16b698ed397'}})  2026-03-05 00:44:54.182856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1a9c38f8-c56f-5625-8ade-2e45962405d2'}})  2026-03-05 00:44:54.182862 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.182867 | orchestrator | 2026-03-05 00:44:54.182873 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-05 00:44:54.182878 | orchestrator | Thursday 05 March 2026 00:44:49 +0000 (0:00:00.151) 0:00:10.417 ******** 2026-03-05 00:44:54.182883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8e61642d-a609-5f4c-883e-a16b698ed397'}})  
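The "Set UUIDs for OSD VGs/LVs" and "Generate lvm_volumes structure (block only)" tasks above derive, per OSD device, the entries that the "Print configuration data" task later echoes. A minimal Python sketch of that mapping, assuming (my reading of the log output, not a documented convention) that each entry is just the device's `osd_lvm_uuid` prefixed with `osd-block-` / `ceph-`:

```python
# Sketch: rebuild the block-only lvm_volumes list shown later under
# "Print configuration data", from the ceph_osd_devices mapping logged
# above for testbed-node-3. The osd-block-/ceph- prefixes match the log
# output; treating them as a fixed naming scheme is an assumption.

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "8e61642d-a609-5f4c-883e-a16b698ed397"},
    "sdc": {"osd_lvm_uuid": "1a9c38f8-c56f-5625-8ade-2e45962405d2"},
}

def lvm_volumes(devices: dict) -> list[dict]:
    """One data/data_vg pair per device; no separate DB/WAL volumes."""
    return [
        {
            "data": f"osd-block-{v['osd_lvm_uuid']}",
            "data_vg": f"ceph-{v['osd_lvm_uuid']}",
        }
        for v in devices.values()
    ]

for vol in lvm_volumes(ceph_osd_devices):
    print(vol)
```

The DB/WAL variants of the task are all skipped in this run, which is why only the block-only structure appears in the compiled configuration.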
2026-03-05 00:44:54.182889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1a9c38f8-c56f-5625-8ade-2e45962405d2'}})  2026-03-05 00:44:54.182894 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.182900 | orchestrator | 2026-03-05 00:44:54.182905 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-05 00:44:54.182912 | orchestrator | Thursday 05 March 2026 00:44:50 +0000 (0:00:00.341) 0:00:10.759 ******** 2026-03-05 00:44:54.182918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8e61642d-a609-5f4c-883e-a16b698ed397'}})  2026-03-05 00:44:54.182936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1a9c38f8-c56f-5625-8ade-2e45962405d2'}})  2026-03-05 00:44:54.182943 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.182950 | orchestrator | 2026-03-05 00:44:54.182956 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-05 00:44:54.182966 | orchestrator | Thursday 05 March 2026 00:44:50 +0000 (0:00:00.155) 0:00:10.914 ******** 2026-03-05 00:44:54.182973 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:44:54.182979 | orchestrator | 2026-03-05 00:44:54.182986 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-05 00:44:54.182992 | orchestrator | Thursday 05 March 2026 00:44:50 +0000 (0:00:00.133) 0:00:11.047 ******** 2026-03-05 00:44:54.182999 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:44:54.183033 | orchestrator | 2026-03-05 00:44:54.183040 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-05 00:44:54.183046 | orchestrator | Thursday 05 March 2026 00:44:50 +0000 (0:00:00.139) 0:00:11.187 ******** 2026-03-05 00:44:54.183052 | orchestrator | skipping: [testbed-node-3] 2026-03-05 
00:44:54.183059 | orchestrator | 2026-03-05 00:44:54.183066 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-05 00:44:54.183072 | orchestrator | Thursday 05 March 2026 00:44:50 +0000 (0:00:00.147) 0:00:11.335 ******** 2026-03-05 00:44:54.183084 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.183091 | orchestrator | 2026-03-05 00:44:54.183098 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-05 00:44:54.183104 | orchestrator | Thursday 05 March 2026 00:44:50 +0000 (0:00:00.136) 0:00:11.472 ******** 2026-03-05 00:44:54.183111 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.183117 | orchestrator | 2026-03-05 00:44:54.183123 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-05 00:44:54.183130 | orchestrator | Thursday 05 March 2026 00:44:50 +0000 (0:00:00.131) 0:00:11.603 ******** 2026-03-05 00:44:54.183136 | orchestrator | ok: [testbed-node-3] => { 2026-03-05 00:44:54.183142 | orchestrator |  "ceph_osd_devices": { 2026-03-05 00:44:54.183149 | orchestrator |  "sdb": { 2026-03-05 00:44:54.183156 | orchestrator |  "osd_lvm_uuid": "8e61642d-a609-5f4c-883e-a16b698ed397" 2026-03-05 00:44:54.183162 | orchestrator |  }, 2026-03-05 00:44:54.183168 | orchestrator |  "sdc": { 2026-03-05 00:44:54.183175 | orchestrator |  "osd_lvm_uuid": "1a9c38f8-c56f-5625-8ade-2e45962405d2" 2026-03-05 00:44:54.183181 | orchestrator |  } 2026-03-05 00:44:54.183187 | orchestrator |  } 2026-03-05 00:44:54.183193 | orchestrator | } 2026-03-05 00:44:54.183200 | orchestrator | 2026-03-05 00:44:54.183207 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-05 00:44:54.183213 | orchestrator | Thursday 05 March 2026 00:44:51 +0000 (0:00:00.136) 0:00:11.740 ******** 2026-03-05 00:44:54.183220 | orchestrator | skipping: [testbed-node-3] 2026-03-05 
00:44:54.183226 | orchestrator | 2026-03-05 00:44:54.183233 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-05 00:44:54.183239 | orchestrator | Thursday 05 March 2026 00:44:51 +0000 (0:00:00.146) 0:00:11.887 ******** 2026-03-05 00:44:54.183246 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.183253 | orchestrator | 2026-03-05 00:44:54.183258 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-05 00:44:54.183264 | orchestrator | Thursday 05 March 2026 00:44:51 +0000 (0:00:00.141) 0:00:12.028 ******** 2026-03-05 00:44:54.183269 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:44:54.183274 | orchestrator | 2026-03-05 00:44:54.183280 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-05 00:44:54.183285 | orchestrator | Thursday 05 March 2026 00:44:51 +0000 (0:00:00.142) 0:00:12.171 ******** 2026-03-05 00:44:54.183291 | orchestrator | changed: [testbed-node-3] => { 2026-03-05 00:44:54.183296 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-05 00:44:54.183302 | orchestrator |  "ceph_osd_devices": { 2026-03-05 00:44:54.183308 | orchestrator |  "sdb": { 2026-03-05 00:44:54.183313 | orchestrator |  "osd_lvm_uuid": "8e61642d-a609-5f4c-883e-a16b698ed397" 2026-03-05 00:44:54.183319 | orchestrator |  }, 2026-03-05 00:44:54.183324 | orchestrator |  "sdc": { 2026-03-05 00:44:54.183330 | orchestrator |  "osd_lvm_uuid": "1a9c38f8-c56f-5625-8ade-2e45962405d2" 2026-03-05 00:44:54.183335 | orchestrator |  } 2026-03-05 00:44:54.183340 | orchestrator |  }, 2026-03-05 00:44:54.183346 | orchestrator |  "lvm_volumes": [ 2026-03-05 00:44:54.183351 | orchestrator |  { 2026-03-05 00:44:54.183357 | orchestrator |  "data": "osd-block-8e61642d-a609-5f4c-883e-a16b698ed397", 2026-03-05 00:44:54.183362 | orchestrator |  "data_vg": "ceph-8e61642d-a609-5f4c-883e-a16b698ed397" 2026-03-05 
00:44:54.183368 | orchestrator |  }, 2026-03-05 00:44:54.183373 | orchestrator |  { 2026-03-05 00:44:54.183379 | orchestrator |  "data": "osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2", 2026-03-05 00:44:54.183384 | orchestrator |  "data_vg": "ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2" 2026-03-05 00:44:54.183393 | orchestrator |  } 2026-03-05 00:44:54.183399 | orchestrator |  ] 2026-03-05 00:44:54.183404 | orchestrator |  } 2026-03-05 00:44:54.183410 | orchestrator | } 2026-03-05 00:44:54.183419 | orchestrator | 2026-03-05 00:44:54.183425 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-05 00:44:54.183430 | orchestrator | Thursday 05 March 2026 00:44:51 +0000 (0:00:00.394) 0:00:12.566 ******** 2026-03-05 00:44:54.183436 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-05 00:44:54.183441 | orchestrator | 2026-03-05 00:44:54.183447 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-05 00:44:54.183452 | orchestrator | 2026-03-05 00:44:54.183457 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-05 00:44:54.183463 | orchestrator | Thursday 05 March 2026 00:44:53 +0000 (0:00:01.784) 0:00:14.350 ******** 2026-03-05 00:44:54.183468 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-05 00:44:54.183473 | orchestrator | 2026-03-05 00:44:54.183479 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-05 00:44:54.183484 | orchestrator | Thursday 05 March 2026 00:44:53 +0000 (0:00:00.256) 0:00:14.606 ******** 2026-03-05 00:44:54.183490 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:44:54.183495 | orchestrator | 2026-03-05 00:44:54.183504 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:45:02.709685 | orchestrator | Thursday 05 March 
2026 00:44:54 +0000 (0:00:00.252) 0:00:14.859 ******** 2026-03-05 00:45:02.709797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-05 00:45:02.709812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-05 00:45:02.709824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-05 00:45:02.709834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-05 00:45:02.709843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-05 00:45:02.709853 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-05 00:45:02.709863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-05 00:45:02.709873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-05 00:45:02.709884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-05 00:45:02.709902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-05 00:45:02.709919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-05 00:45:02.709935 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-05 00:45:02.709957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-05 00:45:02.710143 | orchestrator | 2026-03-05 00:45:02.710166 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:45:02.710176 | orchestrator | Thursday 05 March 2026 00:44:54 +0000 (0:00:00.364) 0:00:15.224 ******** 2026-03-05 00:45:02.710186 | 
orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.710198 | orchestrator | 2026-03-05 00:45:02.710217 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:45:02.710235 | orchestrator | Thursday 05 March 2026 00:44:54 +0000 (0:00:00.209) 0:00:15.433 ******** 2026-03-05 00:45:02.710306 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.710320 | orchestrator | 2026-03-05 00:45:02.710333 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:45:02.710345 | orchestrator | Thursday 05 March 2026 00:44:54 +0000 (0:00:00.211) 0:00:15.644 ******** 2026-03-05 00:45:02.710356 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.710368 | orchestrator | 2026-03-05 00:45:02.710381 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:45:02.710392 | orchestrator | Thursday 05 March 2026 00:44:55 +0000 (0:00:00.194) 0:00:15.838 ******** 2026-03-05 00:45:02.710429 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.710441 | orchestrator | 2026-03-05 00:45:02.710454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:45:02.710466 | orchestrator | Thursday 05 March 2026 00:44:55 +0000 (0:00:00.212) 0:00:16.050 ******** 2026-03-05 00:45:02.710478 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.710488 | orchestrator | 2026-03-05 00:45:02.710497 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:45:02.710507 | orchestrator | Thursday 05 March 2026 00:44:56 +0000 (0:00:00.729) 0:00:16.780 ******** 2026-03-05 00:45:02.710525 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.710546 | orchestrator | 2026-03-05 00:45:02.710597 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2026-03-05 00:45:02.710609 | orchestrator | Thursday 05 March 2026 00:44:56 +0000 (0:00:00.207) 0:00:16.988 ******** 2026-03-05 00:45:02.710619 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.710628 | orchestrator | 2026-03-05 00:45:02.710638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:45:02.710648 | orchestrator | Thursday 05 March 2026 00:44:56 +0000 (0:00:00.210) 0:00:17.198 ******** 2026-03-05 00:45:02.710658 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.710668 | orchestrator | 2026-03-05 00:45:02.710677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:45:02.710687 | orchestrator | Thursday 05 March 2026 00:44:56 +0000 (0:00:00.206) 0:00:17.405 ******** 2026-03-05 00:45:02.710697 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6) 2026-03-05 00:45:02.710708 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6) 2026-03-05 00:45:02.710717 | orchestrator | 2026-03-05 00:45:02.710727 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:45:02.710737 | orchestrator | Thursday 05 March 2026 00:44:57 +0000 (0:00:00.444) 0:00:17.850 ******** 2026-03-05 00:45:02.710746 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9c8197fe-cfc6-470d-b43f-168fdfa4c980) 2026-03-05 00:45:02.710756 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9c8197fe-cfc6-470d-b43f-168fdfa4c980) 2026-03-05 00:45:02.710766 | orchestrator | 2026-03-05 00:45:02.710776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:45:02.710785 | orchestrator | Thursday 05 March 2026 00:44:57 +0000 (0:00:00.477) 0:00:18.328 ******** 2026-03-05 00:45:02.710795 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bc7e009b-77b4-429d-819f-0751386ded0b) 2026-03-05 00:45:02.710805 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bc7e009b-77b4-429d-819f-0751386ded0b) 2026-03-05 00:45:02.710814 | orchestrator | 2026-03-05 00:45:02.710824 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:45:02.710853 | orchestrator | Thursday 05 March 2026 00:44:58 +0000 (0:00:00.542) 0:00:18.870 ******** 2026-03-05 00:45:02.710863 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c272dc3f-f5b6-4857-91f2-561a599f15b5) 2026-03-05 00:45:02.710873 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c272dc3f-f5b6-4857-91f2-561a599f15b5) 2026-03-05 00:45:02.710883 | orchestrator | 2026-03-05 00:45:02.710893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:45:02.710903 | orchestrator | Thursday 05 March 2026 00:44:58 +0000 (0:00:00.550) 0:00:19.421 ******** 2026-03-05 00:45:02.710913 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-05 00:45:02.710923 | orchestrator | 2026-03-05 00:45:02.710932 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:45:02.710942 | orchestrator | Thursday 05 March 2026 00:44:59 +0000 (0:00:00.357) 0:00:19.779 ******** 2026-03-05 00:45:02.710952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-05 00:45:02.711035 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-05 00:45:02.711047 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-05 00:45:02.711057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-05 
00:45:02.711067 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-05 00:45:02.711077 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-05 00:45:02.711086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-05 00:45:02.711096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-05 00:45:02.711105 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-05 00:45:02.711115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-05 00:45:02.711124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-05 00:45:02.711134 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-05 00:45:02.711144 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-05 00:45:02.711153 | orchestrator | 2026-03-05 00:45:02.711163 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:45:02.711172 | orchestrator | Thursday 05 March 2026 00:44:59 +0000 (0:00:00.417) 0:00:20.197 ******** 2026-03-05 00:45:02.711182 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.711192 | orchestrator | 2026-03-05 00:45:02.711201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:45:02.711217 | orchestrator | Thursday 05 March 2026 00:45:00 +0000 (0:00:00.729) 0:00:20.926 ******** 2026-03-05 00:45:02.711227 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.711237 | orchestrator | 2026-03-05 00:45:02.711247 | orchestrator | TASK [Add known partitions to the list 
of available block devices] ************* 2026-03-05 00:45:02.711261 | orchestrator | Thursday 05 March 2026 00:45:00 +0000 (0:00:00.191) 0:00:21.118 ******** 2026-03-05 00:45:02.711281 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.711305 | orchestrator | 2026-03-05 00:45:02.711321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:45:02.711337 | orchestrator | Thursday 05 March 2026 00:45:00 +0000 (0:00:00.212) 0:00:21.330 ******** 2026-03-05 00:45:02.711352 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.711367 | orchestrator | 2026-03-05 00:45:02.711382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:45:02.711396 | orchestrator | Thursday 05 March 2026 00:45:00 +0000 (0:00:00.227) 0:00:21.558 ******** 2026-03-05 00:45:02.711411 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.711428 | orchestrator | 2026-03-05 00:45:02.711445 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:45:02.711462 | orchestrator | Thursday 05 March 2026 00:45:01 +0000 (0:00:00.217) 0:00:21.775 ******** 2026-03-05 00:45:02.711478 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.711496 | orchestrator | 2026-03-05 00:45:02.711507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:45:02.711517 | orchestrator | Thursday 05 March 2026 00:45:01 +0000 (0:00:00.198) 0:00:21.974 ******** 2026-03-05 00:45:02.711527 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:02.711536 | orchestrator | 2026-03-05 00:45:02.711546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:45:02.711556 | orchestrator | Thursday 05 March 2026 00:45:01 +0000 (0:00:00.189) 0:00:22.163 ******** 2026-03-05 00:45:02.711565 | orchestrator | skipping: 
[testbed-node-4] 2026-03-05 00:45:02.711584 | orchestrator | 2026-03-05 00:45:02.711594 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:45:02.711604 | orchestrator | Thursday 05 March 2026 00:45:01 +0000 (0:00:00.197) 0:00:22.360 ******** 2026-03-05 00:45:02.711613 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-05 00:45:02.711624 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-05 00:45:02.711634 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-05 00:45:02.711643 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-05 00:45:02.711653 | orchestrator | 2026-03-05 00:45:02.711663 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:45:02.711673 | orchestrator | Thursday 05 March 2026 00:45:02 +0000 (0:00:00.830) 0:00:23.191 ******** 2026-03-05 00:45:02.711682 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876132 | orchestrator | 2026-03-05 00:45:07.876186 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:45:07.876193 | orchestrator | Thursday 05 March 2026 00:45:02 +0000 (0:00:00.193) 0:00:23.384 ******** 2026-03-05 00:45:07.876198 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876204 | orchestrator | 2026-03-05 00:45:07.876208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:45:07.876213 | orchestrator | Thursday 05 March 2026 00:45:02 +0000 (0:00:00.181) 0:00:23.565 ******** 2026-03-05 00:45:07.876217 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876222 | orchestrator | 2026-03-05 00:45:07.876226 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:45:07.876230 | orchestrator | Thursday 05 March 2026 00:45:03 +0000 (0:00:00.183) 0:00:23.749 ******** 2026-03-05 
00:45:07.876235 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876239 | orchestrator | 2026-03-05 00:45:07.876244 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-05 00:45:07.876248 | orchestrator | Thursday 05 March 2026 00:45:03 +0000 (0:00:00.578) 0:00:24.328 ******** 2026-03-05 00:45:07.876252 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-05 00:45:07.876257 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-05 00:45:07.876261 | orchestrator | 2026-03-05 00:45:07.876265 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-05 00:45:07.876270 | orchestrator | Thursday 05 March 2026 00:45:03 +0000 (0:00:00.118) 0:00:24.447 ******** 2026-03-05 00:45:07.876274 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876278 | orchestrator | 2026-03-05 00:45:07.876283 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-05 00:45:07.876288 | orchestrator | Thursday 05 March 2026 00:45:03 +0000 (0:00:00.112) 0:00:24.559 ******** 2026-03-05 00:45:07.876292 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876296 | orchestrator | 2026-03-05 00:45:07.876301 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-05 00:45:07.876305 | orchestrator | Thursday 05 March 2026 00:45:03 +0000 (0:00:00.109) 0:00:24.668 ******** 2026-03-05 00:45:07.876310 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876314 | orchestrator | 2026-03-05 00:45:07.876318 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-05 00:45:07.876323 | orchestrator | Thursday 05 March 2026 00:45:04 +0000 (0:00:00.121) 0:00:24.789 ******** 2026-03-05 00:45:07.876327 | orchestrator | ok: [testbed-node-4] 2026-03-05 
00:45:07.876332 | orchestrator | 2026-03-05 00:45:07.876337 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-05 00:45:07.876341 | orchestrator | Thursday 05 March 2026 00:45:04 +0000 (0:00:00.116) 0:00:24.906 ******** 2026-03-05 00:45:07.876346 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '487cf15b-a3c4-55bb-8565-d1e78d85d824'}}) 2026-03-05 00:45:07.876351 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04f48836-d47d-5181-a61a-7e2c62572595'}}) 2026-03-05 00:45:07.876368 | orchestrator | 2026-03-05 00:45:07.876372 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-05 00:45:07.876377 | orchestrator | Thursday 05 March 2026 00:45:04 +0000 (0:00:00.117) 0:00:25.023 ******** 2026-03-05 00:45:07.876381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '487cf15b-a3c4-55bb-8565-d1e78d85d824'}})  2026-03-05 00:45:07.876394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04f48836-d47d-5181-a61a-7e2c62572595'}})  2026-03-05 00:45:07.876399 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876403 | orchestrator | 2026-03-05 00:45:07.876408 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-05 00:45:07.876412 | orchestrator | Thursday 05 March 2026 00:45:04 +0000 (0:00:00.105) 0:00:25.129 ******** 2026-03-05 00:45:07.876417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '487cf15b-a3c4-55bb-8565-d1e78d85d824'}})  2026-03-05 00:45:07.876421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04f48836-d47d-5181-a61a-7e2c62572595'}})  2026-03-05 00:45:07.876426 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876430 | 
orchestrator | 2026-03-05 00:45:07.876434 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-05 00:45:07.876439 | orchestrator | Thursday 05 March 2026 00:45:04 +0000 (0:00:00.110) 0:00:25.240 ******** 2026-03-05 00:45:07.876443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '487cf15b-a3c4-55bb-8565-d1e78d85d824'}})  2026-03-05 00:45:07.876448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04f48836-d47d-5181-a61a-7e2c62572595'}})  2026-03-05 00:45:07.876453 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876457 | orchestrator | 2026-03-05 00:45:07.876462 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-05 00:45:07.876466 | orchestrator | Thursday 05 March 2026 00:45:04 +0000 (0:00:00.107) 0:00:25.347 ******** 2026-03-05 00:45:07.876471 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:45:07.876475 | orchestrator | 2026-03-05 00:45:07.876480 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-05 00:45:07.876484 | orchestrator | Thursday 05 March 2026 00:45:04 +0000 (0:00:00.097) 0:00:25.444 ******** 2026-03-05 00:45:07.876489 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:45:07.876493 | orchestrator | 2026-03-05 00:45:07.876498 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-05 00:45:07.876502 | orchestrator | Thursday 05 March 2026 00:45:04 +0000 (0:00:00.094) 0:00:25.538 ******** 2026-03-05 00:45:07.876515 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876520 | orchestrator | 2026-03-05 00:45:07.876524 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-05 00:45:07.876529 | orchestrator | Thursday 05 March 2026 00:45:05 +0000 (0:00:00.231) 0:00:25.770 
******** 2026-03-05 00:45:07.876533 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876537 | orchestrator | 2026-03-05 00:45:07.876542 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-05 00:45:07.876546 | orchestrator | Thursday 05 March 2026 00:45:05 +0000 (0:00:00.111) 0:00:25.881 ******** 2026-03-05 00:45:07.876550 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876555 | orchestrator | 2026-03-05 00:45:07.876559 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-05 00:45:07.876563 | orchestrator | Thursday 05 March 2026 00:45:05 +0000 (0:00:00.101) 0:00:25.982 ******** 2026-03-05 00:45:07.876568 | orchestrator | ok: [testbed-node-4] => { 2026-03-05 00:45:07.876572 | orchestrator |  "ceph_osd_devices": { 2026-03-05 00:45:07.876576 | orchestrator |  "sdb": { 2026-03-05 00:45:07.876581 | orchestrator |  "osd_lvm_uuid": "487cf15b-a3c4-55bb-8565-d1e78d85d824" 2026-03-05 00:45:07.876585 | orchestrator |  }, 2026-03-05 00:45:07.876594 | orchestrator |  "sdc": { 2026-03-05 00:45:07.876598 | orchestrator |  "osd_lvm_uuid": "04f48836-d47d-5181-a61a-7e2c62572595" 2026-03-05 00:45:07.876603 | orchestrator |  } 2026-03-05 00:45:07.876607 | orchestrator |  } 2026-03-05 00:45:07.876612 | orchestrator | } 2026-03-05 00:45:07.876616 | orchestrator | 2026-03-05 00:45:07.876620 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-05 00:45:07.876625 | orchestrator | Thursday 05 March 2026 00:45:05 +0000 (0:00:00.107) 0:00:26.090 ******** 2026-03-05 00:45:07.876629 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876637 | orchestrator | 2026-03-05 00:45:07.876645 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-05 00:45:07.876652 | orchestrator | Thursday 05 March 2026 00:45:05 +0000 (0:00:00.101) 0:00:26.192 ******** 
2026-03-05 00:45:07.876660 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876667 | orchestrator | 2026-03-05 00:45:07.876674 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-05 00:45:07.876682 | orchestrator | Thursday 05 March 2026 00:45:05 +0000 (0:00:00.105) 0:00:26.298 ******** 2026-03-05 00:45:07.876690 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:45:07.876697 | orchestrator | 2026-03-05 00:45:07.876705 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-05 00:45:07.876713 | orchestrator | Thursday 05 March 2026 00:45:05 +0000 (0:00:00.103) 0:00:26.402 ******** 2026-03-05 00:45:07.876721 | orchestrator | changed: [testbed-node-4] => { 2026-03-05 00:45:07.876729 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-05 00:45:07.876738 | orchestrator |  "ceph_osd_devices": { 2026-03-05 00:45:07.876744 | orchestrator |  "sdb": { 2026-03-05 00:45:07.876749 | orchestrator |  "osd_lvm_uuid": "487cf15b-a3c4-55bb-8565-d1e78d85d824" 2026-03-05 00:45:07.876755 | orchestrator |  }, 2026-03-05 00:45:07.876760 | orchestrator |  "sdc": { 2026-03-05 00:45:07.876765 | orchestrator |  "osd_lvm_uuid": "04f48836-d47d-5181-a61a-7e2c62572595" 2026-03-05 00:45:07.876770 | orchestrator |  } 2026-03-05 00:45:07.876776 | orchestrator |  }, 2026-03-05 00:45:07.876781 | orchestrator |  "lvm_volumes": [ 2026-03-05 00:45:07.876786 | orchestrator |  { 2026-03-05 00:45:07.876791 | orchestrator |  "data": "osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824", 2026-03-05 00:45:07.876797 | orchestrator |  "data_vg": "ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824" 2026-03-05 00:45:07.876802 | orchestrator |  }, 2026-03-05 00:45:07.876807 | orchestrator |  { 2026-03-05 00:45:07.876812 | orchestrator |  "data": "osd-block-04f48836-d47d-5181-a61a-7e2c62572595", 2026-03-05 00:45:07.876817 | orchestrator |  "data_vg": "ceph-04f48836-d47d-5181-a61a-7e2c62572595" 
2026-03-05 00:45:07.876822 | orchestrator |             }
2026-03-05 00:45:07.876828 | orchestrator |         ]
2026-03-05 00:45:07.876832 | orchestrator |     }
2026-03-05 00:45:07.876837 | orchestrator | }
2026-03-05 00:45:07.876843 | orchestrator |
2026-03-05 00:45:07.876848 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-05 00:45:07.876853 | orchestrator | Thursday 05 March 2026 00:45:05 +0000 (0:00:00.172) 0:00:26.575 ********
2026-03-05 00:45:07.876859 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-05 00:45:07.876864 | orchestrator |
2026-03-05 00:45:07.876869 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-05 00:45:07.876874 | orchestrator |
2026-03-05 00:45:07.876879 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-05 00:45:07.876884 | orchestrator | Thursday 05 March 2026 00:45:06 +0000 (0:00:00.915) 0:00:27.490 ********
2026-03-05 00:45:07.876889 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-05 00:45:07.876894 | orchestrator |
2026-03-05 00:45:07.876900 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-05 00:45:07.876913 | orchestrator | Thursday 05 March 2026 00:45:07 +0000 (0:00:00.510) 0:00:28.000 ********
2026-03-05 00:45:07.876919 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:45:07.876924 | orchestrator |
2026-03-05 00:45:07.876929 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:07.876934 | orchestrator | Thursday 05 March 2026 00:45:07 +0000 (0:00:00.207) 0:00:28.207 ********
2026-03-05 00:45:07.876939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-05 00:45:07.876944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-05 00:45:07.876949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-05 00:45:07.876954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-05 00:45:07.876959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-05 00:45:07.876969 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-05 00:45:15.575848 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-05 00:45:15.576003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-05 00:45:15.576020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-05 00:45:15.576032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-05 00:45:15.576044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-05 00:45:15.576065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-05 00:45:15.576077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-05 00:45:15.576089 | orchestrator |
2026-03-05 00:45:15.576101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:15.576113 | orchestrator | Thursday 05 March 2026 00:45:07 +0000 (0:00:00.344) 0:00:28.552 ********
2026-03-05 00:45:15.576124 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.576136 | orchestrator |
2026-03-05 00:45:15.576147 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:15.576158 | orchestrator | Thursday 05 March 2026 00:45:08 +0000 (0:00:00.185) 0:00:28.737 ********
2026-03-05 00:45:15.576169 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.576180 | orchestrator |
2026-03-05 00:45:15.576191 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:15.576202 | orchestrator | Thursday 05 March 2026 00:45:08 +0000 (0:00:00.191) 0:00:28.928 ********
2026-03-05 00:45:15.576212 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.576223 | orchestrator |
2026-03-05 00:45:15.576234 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:15.576245 | orchestrator | Thursday 05 March 2026 00:45:08 +0000 (0:00:00.193) 0:00:29.122 ********
2026-03-05 00:45:15.576256 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.576267 | orchestrator |
2026-03-05 00:45:15.576278 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:15.576289 | orchestrator | Thursday 05 March 2026 00:45:08 +0000 (0:00:00.168) 0:00:29.291 ********
2026-03-05 00:45:15.576299 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.576310 | orchestrator |
2026-03-05 00:45:15.576321 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:15.576332 | orchestrator | Thursday 05 March 2026 00:45:08 +0000 (0:00:00.202) 0:00:29.493 ********
2026-03-05 00:45:15.576343 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.576353 | orchestrator |
2026-03-05 00:45:15.576364 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:15.576375 | orchestrator | Thursday 05 March 2026 00:45:09 +0000 (0:00:00.194) 0:00:29.688 ********
2026-03-05 00:45:15.576411 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.576425 | orchestrator |
2026-03-05 00:45:15.576438 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:15.576451 | orchestrator | Thursday 05 March 2026 00:45:09 +0000 (0:00:00.164) 0:00:29.853 ********
2026-03-05 00:45:15.576463 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.576476 | orchestrator |
2026-03-05 00:45:15.576489 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:15.576502 | orchestrator | Thursday 05 March 2026 00:45:09 +0000 (0:00:00.176) 0:00:30.029 ********
2026-03-05 00:45:15.576515 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90)
2026-03-05 00:45:15.576527 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90)
2026-03-05 00:45:15.576540 | orchestrator |
2026-03-05 00:45:15.576554 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:15.576567 | orchestrator | Thursday 05 March 2026 00:45:10 +0000 (0:00:00.723) 0:00:30.753 ********
2026-03-05 00:45:15.576580 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_177e9830-d762-48d2-8720-88dd872b3a27)
2026-03-05 00:45:15.576594 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_177e9830-d762-48d2-8720-88dd872b3a27)
2026-03-05 00:45:15.576606 | orchestrator |
2026-03-05 00:45:15.576619 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:15.576631 | orchestrator | Thursday 05 March 2026 00:45:10 +0000 (0:00:00.383) 0:00:31.136 ********
2026-03-05 00:45:15.576644 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_80e7620b-1c7d-40ff-852b-40246feca9c5)
2026-03-05 00:45:15.576657 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_80e7620b-1c7d-40ff-852b-40246feca9c5)
2026-03-05 00:45:15.576670 | orchestrator |
2026-03-05 00:45:15.576682 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:15.576696 | orchestrator | Thursday 05 March 2026 00:45:10 +0000 (0:00:00.392) 0:00:31.529 ********
2026-03-05 00:45:15.576709 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_886d7f4d-c342-4547-93ea-f5198c18b4a1)
2026-03-05 00:45:15.576723 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_886d7f4d-c342-4547-93ea-f5198c18b4a1)
2026-03-05 00:45:15.576736 | orchestrator |
2026-03-05 00:45:15.576748 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:45:15.576758 | orchestrator | Thursday 05 March 2026 00:45:11 +0000 (0:00:00.380) 0:00:31.909 ********
2026-03-05 00:45:15.576769 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-05 00:45:15.576780 | orchestrator |
2026-03-05 00:45:15.576791 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.576819 | orchestrator | Thursday 05 March 2026 00:45:11 +0000 (0:00:00.293) 0:00:32.202 ********
2026-03-05 00:45:15.576831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-05 00:45:15.576842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-05 00:45:15.576852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-05 00:45:15.576863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-05 00:45:15.576874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-05 00:45:15.576902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-05 00:45:15.576914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-05 00:45:15.576926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-05 00:45:15.577010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-05 00:45:15.577023 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-05 00:45:15.577033 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-05 00:45:15.577044 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-05 00:45:15.577055 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-05 00:45:15.577066 | orchestrator |
2026-03-05 00:45:15.577077 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.577088 | orchestrator | Thursday 05 March 2026 00:45:11 +0000 (0:00:00.350) 0:00:32.552 ********
2026-03-05 00:45:15.577099 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.577110 | orchestrator |
2026-03-05 00:45:15.577121 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.577131 | orchestrator | Thursday 05 March 2026 00:45:12 +0000 (0:00:00.203) 0:00:32.756 ********
2026-03-05 00:45:15.577142 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.577153 | orchestrator |
2026-03-05 00:45:15.577164 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.577175 | orchestrator | Thursday 05 March 2026 00:45:12 +0000 (0:00:00.201) 0:00:32.957 ********
2026-03-05 00:45:15.577191 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.577203 | orchestrator |
2026-03-05 00:45:15.577214 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.577224 | orchestrator | Thursday 05 March 2026 00:45:12 +0000 (0:00:00.269) 0:00:33.226 ********
2026-03-05 00:45:15.577235 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.577246 | orchestrator |
2026-03-05 00:45:15.577257 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.577267 | orchestrator | Thursday 05 March 2026 00:45:12 +0000 (0:00:00.255) 0:00:33.482 ********
2026-03-05 00:45:15.577278 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.577289 | orchestrator |
2026-03-05 00:45:15.577300 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.577310 | orchestrator | Thursday 05 March 2026 00:45:13 +0000 (0:00:00.273) 0:00:33.756 ********
2026-03-05 00:45:15.577321 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.577332 | orchestrator |
2026-03-05 00:45:15.577342 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.577353 | orchestrator | Thursday 05 March 2026 00:45:13 +0000 (0:00:00.671) 0:00:34.427 ********
2026-03-05 00:45:15.577364 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.577375 | orchestrator |
2026-03-05 00:45:15.577385 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.577396 | orchestrator | Thursday 05 March 2026 00:45:13 +0000 (0:00:00.207) 0:00:34.635 ********
2026-03-05 00:45:15.577407 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.577418 | orchestrator |
2026-03-05 00:45:15.577428 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.577439 | orchestrator | Thursday 05 March 2026 00:45:14 +0000 (0:00:00.238) 0:00:34.873 ********
2026-03-05 00:45:15.577450 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-05 00:45:15.577461 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-05 00:45:15.577472 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-05 00:45:15.577482 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-05 00:45:15.577493 | orchestrator |
2026-03-05 00:45:15.577504 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.577514 | orchestrator | Thursday 05 March 2026 00:45:14 +0000 (0:00:00.605) 0:00:35.479 ********
2026-03-05 00:45:15.577525 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.577536 | orchestrator |
2026-03-05 00:45:15.577554 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.577565 | orchestrator | Thursday 05 March 2026 00:45:14 +0000 (0:00:00.188) 0:00:35.667 ********
2026-03-05 00:45:15.577576 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.577587 | orchestrator |
2026-03-05 00:45:15.577597 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.577608 | orchestrator | Thursday 05 March 2026 00:45:15 +0000 (0:00:00.185) 0:00:35.853 ********
2026-03-05 00:45:15.577619 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.577630 | orchestrator |
2026-03-05 00:45:15.577640 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:45:15.577651 | orchestrator | Thursday 05 March 2026 00:45:15 +0000 (0:00:00.194) 0:00:36.048 ********
2026-03-05 00:45:15.577662 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:15.577673 | orchestrator |
2026-03-05 00:45:15.577690 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-05 00:45:20.593565 | orchestrator | Thursday 05 March 2026 00:45:15 +0000 (0:00:00.204) 0:00:36.253 ********
2026-03-05 00:45:20.593696 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-05 00:45:20.593711 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-05 00:45:20.593722 | orchestrator |
2026-03-05 00:45:20.593746 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-05 00:45:20.593791 | orchestrator | Thursday 05 March 2026 00:45:15 +0000 (0:00:00.180) 0:00:36.434 ********
2026-03-05 00:45:20.593802 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:20.593812 | orchestrator |
2026-03-05 00:45:20.593821 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-05 00:45:20.593830 | orchestrator | Thursday 05 March 2026 00:45:15 +0000 (0:00:00.152) 0:00:36.586 ********
2026-03-05 00:45:20.593839 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:20.593848 | orchestrator |
2026-03-05 00:45:20.593857 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-05 00:45:20.593866 | orchestrator | Thursday 05 March 2026 00:45:16 +0000 (0:00:00.146) 0:00:36.733 ********
2026-03-05 00:45:20.593875 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:20.593883 | orchestrator |
2026-03-05 00:45:20.593892 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-05 00:45:20.593901 | orchestrator | Thursday 05 March 2026 00:45:16 +0000 (0:00:00.489) 0:00:37.222 ********
2026-03-05 00:45:20.593910 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:45:20.593920 | orchestrator |
2026-03-05 00:45:20.593929 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-05 00:45:20.593939 | orchestrator | Thursday 05 March 2026 00:45:16 +0000 (0:00:00.256) 0:00:37.478 ********
2026-03-05 00:45:20.593948 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bb27c3c1-5e00-588a-af48-66c3e9a20c72'}})
2026-03-05 00:45:20.593958 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52eeae7c-0ac3-5716-aafe-40e466221a22'}})
2026-03-05 00:45:20.593967 | orchestrator |
2026-03-05 00:45:20.593995 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-05 00:45:20.594005 | orchestrator | Thursday 05 March 2026 00:45:17 +0000 (0:00:00.311) 0:00:37.790 ********
2026-03-05 00:45:20.594050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bb27c3c1-5e00-588a-af48-66c3e9a20c72'}})
2026-03-05 00:45:20.594063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52eeae7c-0ac3-5716-aafe-40e466221a22'}})
2026-03-05 00:45:20.594072 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:20.594081 | orchestrator |
2026-03-05 00:45:20.594090 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-05 00:45:20.594101 | orchestrator | Thursday 05 March 2026 00:45:17 +0000 (0:00:00.255) 0:00:38.045 ********
2026-03-05 00:45:20.594112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bb27c3c1-5e00-588a-af48-66c3e9a20c72'}})
2026-03-05 00:45:20.594143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52eeae7c-0ac3-5716-aafe-40e466221a22'}})
2026-03-05 00:45:20.594153 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:20.594164 | orchestrator |
2026-03-05 00:45:20.594175 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-05 00:45:20.594185 | orchestrator | Thursday 05 March 2026 00:45:17 +0000 (0:00:00.191) 0:00:38.236 ********
2026-03-05 00:45:20.594210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bb27c3c1-5e00-588a-af48-66c3e9a20c72'}})
2026-03-05 00:45:20.594221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52eeae7c-0ac3-5716-aafe-40e466221a22'}})
2026-03-05 00:45:20.594232 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:20.594242 | orchestrator |
2026-03-05 00:45:20.594252 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-05 00:45:20.594262 | orchestrator | Thursday 05 March 2026 00:45:17 +0000 (0:00:00.143) 0:00:38.379 ********
2026-03-05 00:45:20.594272 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:45:20.594283 | orchestrator |
2026-03-05 00:45:20.594292 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-05 00:45:20.594303 | orchestrator | Thursday 05 March 2026 00:45:17 +0000 (0:00:00.137) 0:00:38.517 ********
2026-03-05 00:45:20.594313 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:45:20.594323 | orchestrator |
2026-03-05 00:45:20.594332 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-05 00:45:20.594343 | orchestrator | Thursday 05 March 2026 00:45:17 +0000 (0:00:00.132) 0:00:38.649 ********
2026-03-05 00:45:20.594353 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:20.594363 | orchestrator |
2026-03-05 00:45:20.594373 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-05 00:45:20.594383 | orchestrator | Thursday 05 March 2026 00:45:18 +0000 (0:00:00.123) 0:00:38.773 ********
2026-03-05 00:45:20.594393 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:20.594404 | orchestrator |
2026-03-05 00:45:20.594414 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-05 00:45:20.594425 | orchestrator | Thursday 05 March 2026 00:45:18 +0000 (0:00:00.127) 0:00:38.901 ********
2026-03-05 00:45:20.594435 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:20.594445 | orchestrator |
2026-03-05 00:45:20.594456 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-05 00:45:20.594466 | orchestrator | Thursday 05 March 2026 00:45:18 +0000 (0:00:00.131) 0:00:39.033 ********
2026-03-05 00:45:20.594475 | orchestrator | ok: [testbed-node-5] => {
2026-03-05 00:45:20.594484 | orchestrator |     "ceph_osd_devices": {
2026-03-05 00:45:20.594493 | orchestrator |         "sdb": {
2026-03-05 00:45:20.594517 | orchestrator |             "osd_lvm_uuid": "bb27c3c1-5e00-588a-af48-66c3e9a20c72"
2026-03-05 00:45:20.594527 | orchestrator |         },
2026-03-05 00:45:20.594536 | orchestrator |         "sdc": {
2026-03-05 00:45:20.594544 | orchestrator |             "osd_lvm_uuid": "52eeae7c-0ac3-5716-aafe-40e466221a22"
2026-03-05 00:45:20.594553 | orchestrator |         }
2026-03-05 00:45:20.594562 | orchestrator |     }
2026-03-05 00:45:20.594571 | orchestrator | }
2026-03-05 00:45:20.594580 | orchestrator |
2026-03-05 00:45:20.594589 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-05 00:45:20.594597 | orchestrator | Thursday 05 March 2026 00:45:18 +0000 (0:00:00.136) 0:00:39.169 ********
2026-03-05 00:45:20.594606 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:20.594614 | orchestrator |
2026-03-05 00:45:20.594623 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-05 00:45:20.594631 | orchestrator | Thursday 05 March 2026 00:45:18 +0000 (0:00:00.352) 0:00:39.521 ********
2026-03-05 00:45:20.594640 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:20.594706 | orchestrator |
2026-03-05 00:45:20.594716 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-05 00:45:20.594724 | orchestrator | Thursday 05 March 2026 00:45:18 +0000 (0:00:00.139) 0:00:39.661 ********
2026-03-05 00:45:20.594733 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:45:20.594741 | orchestrator |
2026-03-05 00:45:20.594750 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-05 00:45:20.594759 | orchestrator | Thursday 05 March 2026 00:45:19 +0000 (0:00:00.133) 0:00:39.794 ********
2026-03-05 00:45:20.594767 | orchestrator | changed: [testbed-node-5] => {
2026-03-05 00:45:20.594776 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-05 00:45:20.594785 | orchestrator |         "ceph_osd_devices": {
2026-03-05 00:45:20.594794 | orchestrator |             "sdb": {
2026-03-05 00:45:20.594802 | orchestrator |                 "osd_lvm_uuid": "bb27c3c1-5e00-588a-af48-66c3e9a20c72"
2026-03-05 00:45:20.594811 | orchestrator |             },
2026-03-05 00:45:20.594820 | orchestrator |             "sdc": {
2026-03-05 00:45:20.594829 | orchestrator |                 "osd_lvm_uuid": "52eeae7c-0ac3-5716-aafe-40e466221a22"
2026-03-05 00:45:20.594837 | orchestrator |             }
2026-03-05 00:45:20.594846 | orchestrator |         },
2026-03-05 00:45:20.594854 | orchestrator |         "lvm_volumes": [
2026-03-05 00:45:20.594863 | orchestrator |             {
2026-03-05 00:45:20.594872 | orchestrator |                 "data": "osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72",
2026-03-05 00:45:20.594881 | orchestrator |                 "data_vg": "ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72"
2026-03-05 00:45:20.594889 | orchestrator |             },
2026-03-05 00:45:20.594898 | orchestrator |             {
2026-03-05 00:45:20.594907 | orchestrator |                 "data": "osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22",
2026-03-05 00:45:20.594915 | orchestrator |                 "data_vg": "ceph-52eeae7c-0ac3-5716-aafe-40e466221a22"
2026-03-05 00:45:20.594924 | orchestrator |             }
2026-03-05 00:45:20.594933 | orchestrator |         ]
2026-03-05 00:45:20.594946 | orchestrator |     }
2026-03-05 00:45:20.594954 | orchestrator | }
2026-03-05 00:45:20.594963 | orchestrator |
2026-03-05 00:45:20.594972 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-05 00:45:20.594998 | orchestrator | Thursday 05 March 2026 00:45:19 +0000 (0:00:00.197) 0:00:39.992 ********
2026-03-05 00:45:20.595007 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-05 00:45:20.595016 | orchestrator |
2026-03-05 00:45:20.595025 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:45:20.595033 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-05 00:45:20.595044 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-05 00:45:20.595053 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-05 00:45:20.595062 | orchestrator |
2026-03-05 00:45:20.595070 | orchestrator |
2026-03-05 00:45:20.595079 | orchestrator |
2026-03-05 00:45:20.595088 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:45:20.595096 | orchestrator | Thursday 05 March 2026 00:45:20 +0000 (0:00:01.262) 0:00:41.255 ********
2026-03-05 00:45:20.595105 | orchestrator | ===============================================================================
2026-03-05 00:45:20.595114 | orchestrator | Write configuration file ------------------------------------------------ 3.96s
2026-03-05 00:45:20.595122 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s
2026-03-05 00:45:20.595131 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s
2026-03-05 00:45:20.595139 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.01s
2026-03-05 00:45:20.595154 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2026-03-05 00:45:20.595163 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s
2026-03-05 00:45:20.595172 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s
2026-03-05 00:45:20.595180 | orchestrator | Print configuration data ------------------------------------------------ 0.77s
2026-03-05 00:45:20.595189 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.74s
2026-03-05 00:45:20.595197 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s
2026-03-05 00:45:20.595206 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2026-03-05 00:45:20.595214 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2026-03-05 00:45:20.595223 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s
2026-03-05 00:45:20.595238 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2026-03-05 00:45:21.412766 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.64s
2026-03-05 00:45:21.412871 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.61s
2026-03-05 00:45:21.412886 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2026-03-05 00:45:21.412899 | orchestrator | Print WAL devices ------------------------------------------------------- 0.60s
2026-03-05 00:45:21.412910 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2026-03-05 00:45:21.412921 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2026-03-05 00:45:43.821068 | orchestrator | 2026-03-05 00:45:43 | INFO  | Task e48d62fa-c358-44d0-ba6a-0b474ab7f1fe (sync inventory) is running in background. Output coming soon.
2026-03-05 00:46:08.939808 | orchestrator | 2026-03-05 00:45:45 | INFO  | Starting group_vars file reorganization
2026-03-05 00:46:08.939892 | orchestrator | 2026-03-05 00:45:45 | INFO  | Moved 0 file(s) to their respective directories
2026-03-05 00:46:08.939901 | orchestrator | 2026-03-05 00:45:45 | INFO  | Group_vars file reorganization completed
2026-03-05 00:46:08.939908 | orchestrator | 2026-03-05 00:45:47 | INFO  | Starting variable preparation from inventory
2026-03-05 00:46:08.939914 | orchestrator | 2026-03-05 00:45:50 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-05 00:46:08.939920 | orchestrator | 2026-03-05 00:45:50 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-05 00:46:08.939994 | orchestrator | 2026-03-05 00:45:50 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-05 00:46:08.940004 | orchestrator | 2026-03-05 00:45:50 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-05 00:46:08.940010 | orchestrator | 2026-03-05 00:45:50 | INFO  | Variable preparation completed
2026-03-05 00:46:08.940017 | orchestrator | 2026-03-05 00:45:51 | INFO  | Starting inventory overwrite handling
2026-03-05 00:46:08.940023 | orchestrator | 2026-03-05 00:45:51 | INFO  | Handling group overwrites in 99-overwrite
2026-03-05 00:46:08.940032 | orchestrator | 2026-03-05 00:45:51 | INFO  | Removing group frr:children from 60-generic
2026-03-05 00:46:08.940038 | orchestrator | 2026-03-05 00:45:51 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-05 00:46:08.940044 | orchestrator | 2026-03-05 00:45:51 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-05 00:46:08.940050 | orchestrator | 2026-03-05 00:45:51 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-05 00:46:08.940055 | orchestrator | 2026-03-05 00:45:51 | INFO  | Handling group overwrites in 20-roles
2026-03-05 00:46:08.940061 | orchestrator | 2026-03-05 00:45:51 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-05 00:46:08.940084 | orchestrator | 2026-03-05 00:45:51 | INFO  | Removed 5 group(s) in total
2026-03-05 00:46:08.940090 | orchestrator | 2026-03-05 00:45:51 | INFO  | Inventory overwrite handling completed
2026-03-05 00:46:08.940095 | orchestrator | 2026-03-05 00:45:52 | INFO  | Starting merge of inventory files
2026-03-05 00:46:08.940101 | orchestrator | 2026-03-05 00:45:52 | INFO  | Inventory files merged successfully
2026-03-05 00:46:08.940106 | orchestrator | 2026-03-05 00:45:56 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-05 00:46:08.940112 | orchestrator | 2026-03-05 00:46:07 | INFO  | Successfully wrote ClusterShell configuration
2026-03-05 00:46:08.940117 | orchestrator | [master 95465ce] 2026-03-05-00-46
2026-03-05 00:46:08.940125 | orchestrator |  1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-05 00:46:11.335429 | orchestrator | 2026-03-05 00:46:11 | INFO  | Task 2c2c6c91-9e88-4307-9869-b0fc203c5c85 (ceph-create-lvm-devices) was prepared for execution.
2026-03-05 00:46:11.335508 | orchestrator | 2026-03-05 00:46:11 | INFO  | It takes a moment until task 2c2c6c91-9e88-4307-9869-b0fc203c5c85 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-05 00:46:24.702796 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-05 00:46:24.702883 | orchestrator | 2.16.14
2026-03-05 00:46:24.702895 | orchestrator |
2026-03-05 00:46:24.702904 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-05 00:46:24.702967 | orchestrator |
2026-03-05 00:46:24.702978 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-05 00:46:24.702987 | orchestrator | Thursday 05 March 2026 00:46:15 +0000 (0:00:00.319) 0:00:00.319 ********
2026-03-05 00:46:24.702995 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-05 00:46:24.703004 | orchestrator |
2026-03-05 00:46:24.703012 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-05 00:46:24.703020 | orchestrator | Thursday 05 March 2026 00:46:16 +0000 (0:00:00.310) 0:00:00.630 ********
2026-03-05 00:46:24.703028 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:46:24.703036 | orchestrator |
2026-03-05 00:46:24.703045 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703053 | orchestrator | Thursday 05 March 2026 00:46:16 +0000 (0:00:00.285) 0:00:00.916 ********
2026-03-05 00:46:24.703061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-05 00:46:24.703069 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-05 00:46:24.703077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-05 00:46:24.703085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-05 00:46:24.703093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-05 00:46:24.703101 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-05 00:46:24.703109 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-05 00:46:24.703117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-05 00:46:24.703125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-05 00:46:24.703133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-05 00:46:24.703141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-05 00:46:24.703149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-05 00:46:24.703162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-05 00:46:24.703207 | orchestrator |
2026-03-05 00:46:24.703224 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703237 | orchestrator | Thursday 05 March 2026 00:46:17 +0000 (0:00:00.633) 0:00:01.550 ********
2026-03-05 00:46:24.703250 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.703263 | orchestrator |
2026-03-05 00:46:24.703277 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703291 | orchestrator | Thursday 05 March 2026 00:46:17 +0000 (0:00:00.230) 0:00:01.780 ********
2026-03-05 00:46:24.703305 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.703319 | orchestrator |
2026-03-05 00:46:24.703332 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703348 | orchestrator | Thursday 05 March 2026 00:46:17 +0000 (0:00:00.304) 0:00:02.084 ********
2026-03-05 00:46:24.703363 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.703377 | orchestrator |
2026-03-05 00:46:24.703388 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703398 | orchestrator | Thursday 05 March 2026 00:46:17 +0000 (0:00:00.217) 0:00:02.302 ********
2026-03-05 00:46:24.703407 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.703416 | orchestrator |
2026-03-05 00:46:24.703426 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703435 | orchestrator | Thursday 05 March 2026 00:46:18 +0000 (0:00:00.236) 0:00:02.538 ********
2026-03-05 00:46:24.703445 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.703454 | orchestrator |
2026-03-05 00:46:24.703463 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703472 | orchestrator | Thursday 05 March 2026 00:46:18 +0000 (0:00:00.211) 0:00:02.749 ********
2026-03-05 00:46:24.703481 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.703491 | orchestrator |
2026-03-05 00:46:24.703500 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703509 | orchestrator | Thursday 05 March 2026 00:46:18 +0000 (0:00:00.220) 0:00:02.970 ********
2026-03-05 00:46:24.703519 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.703527 | orchestrator |
2026-03-05 00:46:24.703536 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703545 | orchestrator | Thursday 05 March 2026 00:46:18 +0000 (0:00:00.242) 0:00:03.212 ********
2026-03-05 00:46:24.703555 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.703564 | orchestrator |
2026-03-05 00:46:24.703573 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703582 | orchestrator | Thursday 05 March 2026 00:46:19 +0000 (0:00:00.266) 0:00:03.479 ********
2026-03-05 00:46:24.703604 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb)
2026-03-05 00:46:24.703623 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb)
2026-03-05 00:46:24.703632 | orchestrator |
2026-03-05 00:46:24.703641 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703667 | orchestrator | Thursday 05 March 2026 00:46:19 +0000 (0:00:00.426) 0:00:03.905 ********
2026-03-05 00:46:24.703677 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4)
2026-03-05 00:46:24.703687 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4)
2026-03-05 00:46:24.703697 | orchestrator |
2026-03-05 00:46:24.703706 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703714 | orchestrator | Thursday 05 March 2026 00:46:20 +0000 (0:00:00.645) 0:00:04.551 ********
2026-03-05 00:46:24.703722 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1cde8d38-c9d3-4512-8106-c139834ff42b)
2026-03-05 00:46:24.703730 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1cde8d38-c9d3-4512-8106-c139834ff42b)
2026-03-05 00:46:24.703746 | orchestrator |
2026-03-05 00:46:24.703754 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703762 | orchestrator | Thursday 05 March 2026 00:46:21 +0000 (0:00:00.996) 0:00:05.547 ********
2026-03-05 00:46:24.703772 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e9fbedff-eb29-4e1b-a232-9476e4a5bada)
2026-03-05 00:46:24.703786 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e9fbedff-eb29-4e1b-a232-9476e4a5bada)
2026-03-05 00:46:24.703805 | orchestrator |
2026-03-05 00:46:24.703820 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:46:24.703832 | orchestrator | Thursday 05 March 2026 00:46:22 +0000 (0:00:00.969) 0:00:06.517 ********
2026-03-05 00:46:24.703846 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-05 00:46:24.703859 | orchestrator |
2026-03-05 00:46:24.703872 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:24.703886 | orchestrator | Thursday 05 March 2026 00:46:22 +0000 (0:00:00.426) 0:00:06.943 ********
2026-03-05 00:46:24.703900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-05 00:46:24.703937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-05 00:46:24.703948 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-05 00:46:24.703971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-05 00:46:24.703979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-05 00:46:24.703987 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-05 00:46:24.703995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-05 00:46:24.704002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-05 00:46:24.704010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-05 00:46:24.704018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-05 00:46:24.704025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-05 00:46:24.704037 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-05 00:46:24.704045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-05 00:46:24.704053 | orchestrator |
2026-03-05 00:46:24.704060 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:24.704068 | orchestrator | Thursday 05 March 2026 00:46:23 +0000 (0:00:00.486) 0:00:07.429 ********
2026-03-05 00:46:24.704076 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.704084 | orchestrator |
2026-03-05 00:46:24.704092 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:24.704100 | orchestrator | Thursday 05 March 2026 00:46:23 +0000 (0:00:00.245) 0:00:07.675 ********
2026-03-05 00:46:24.704107 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.704115 | orchestrator |
2026-03-05 00:46:24.704123 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:24.704131 | orchestrator | Thursday 05 March 2026 00:46:23 +0000 (0:00:00.223) 0:00:07.899 ********
2026-03-05 00:46:24.704138 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.704146 | orchestrator |
2026-03-05 00:46:24.704156 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:24.704169 | orchestrator | Thursday 05 March 2026 00:46:23 +0000 (0:00:00.209) 0:00:08.108 ********
2026-03-05 00:46:24.704190 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.704214 | orchestrator |
2026-03-05 00:46:24.704228 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:24.704242 | orchestrator | Thursday 05 March 2026 00:46:23 +0000 (0:00:00.214) 0:00:08.322 ********
2026-03-05 00:46:24.704256 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.704266 | orchestrator |
2026-03-05 00:46:24.704274 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:24.704282 | orchestrator | Thursday 05 March 2026 00:46:24 +0000 (0:00:00.235) 0:00:08.557 ********
2026-03-05 00:46:24.704290 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.704298 | orchestrator |
2026-03-05 00:46:24.704305 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:24.704313 | orchestrator | Thursday 05 March 2026 00:46:24 +0000 (0:00:00.242) 0:00:08.800 ********
2026-03-05 00:46:24.704321 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:24.704329 | orchestrator |
2026-03-05 00:46:24.704344 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:33.619122 | orchestrator | Thursday 05 March 2026 00:46:24 +0000 (0:00:00.231) 0:00:09.032 ********
2026-03-05 00:46:33.619219 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619230 | orchestrator |
2026-03-05 00:46:33.619239 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:33.619246 | orchestrator | Thursday 05 March 2026 00:46:24 +0000 (0:00:00.199) 0:00:09.232 ********
2026-03-05 00:46:33.619254 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-05 00:46:33.619263 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-05 00:46:33.619271 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-05 00:46:33.619278 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-05 00:46:33.619286 | orchestrator |
2026-03-05 00:46:33.619293 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:33.619301 | orchestrator | Thursday 05 March 2026 00:46:26 +0000 (0:00:01.146) 0:00:10.378 ********
2026-03-05 00:46:33.619308 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619315 | orchestrator |
2026-03-05 00:46:33.619322 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:33.619329 | orchestrator | Thursday 05 March 2026 00:46:26 +0000 (0:00:00.228) 0:00:10.607 ********
2026-03-05 00:46:33.619335 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619342 | orchestrator |
2026-03-05 00:46:33.619348 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:33.619355 | orchestrator | Thursday 05 March 2026 00:46:26 +0000 (0:00:00.220) 0:00:10.827 ********
2026-03-05 00:46:33.619362 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619369 | orchestrator |
2026-03-05 00:46:33.619377 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:46:33.619384 | orchestrator | Thursday 05 March 2026 00:46:26 +0000 (0:00:00.224) 0:00:11.051 ********
2026-03-05 00:46:33.619391 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619399 | orchestrator |
2026-03-05 00:46:33.619407 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-05 00:46:33.619414 | orchestrator | Thursday 05 March 2026 00:46:26 +0000 (0:00:00.222) 0:00:11.274 ********
2026-03-05 00:46:33.619421 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619428 | orchestrator |
2026-03-05 00:46:33.619435 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-05 00:46:33.619443 | orchestrator | Thursday 05 March 2026 00:46:27 +0000 (0:00:00.153) 0:00:11.428 ********
2026-03-05 00:46:33.619450 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8e61642d-a609-5f4c-883e-a16b698ed397'}})
2026-03-05 00:46:33.619459 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1a9c38f8-c56f-5625-8ade-2e45962405d2'}})
2026-03-05 00:46:33.619466 | orchestrator |
2026-03-05 00:46:33.619473 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-05 00:46:33.619504 | orchestrator | Thursday 05 March 2026 00:46:27 +0000 (0:00:00.205) 0:00:11.633 ********
2026-03-05 00:46:33.619513 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 00:46:33.619522 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 00:46:33.619530 | orchestrator |
2026-03-05 00:46:33.619537 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-05 00:46:33.619545 | orchestrator | Thursday 05 March 2026 00:46:29 +0000 (0:00:02.002) 0:00:13.636 ********
2026-03-05 00:46:33.619552 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 00:46:33.619562 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 00:46:33.619569 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619576 | orchestrator |
2026-03-05 00:46:33.619583 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-05 00:46:33.619591 | orchestrator | Thursday 05 March 2026 00:46:29 +0000 (0:00:00.152) 0:00:13.788 ********
2026-03-05 00:46:33.619598 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 00:46:33.619605 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 00:46:33.619612 | orchestrator |
2026-03-05 00:46:33.619619 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-05 00:46:33.619627 | orchestrator | Thursday 05 March 2026 00:46:31 +0000 (0:00:01.570) 0:00:15.358 ********
2026-03-05 00:46:33.619635 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 00:46:33.619642 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 00:46:33.619649 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619656 | orchestrator |
2026-03-05 00:46:33.619663 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-05 00:46:33.619670 | orchestrator | Thursday 05 March 2026 00:46:31 +0000 (0:00:00.163) 0:00:15.522 ********
2026-03-05 00:46:33.619704 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619713 | orchestrator |
2026-03-05 00:46:33.619719 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-05 00:46:33.619726 | orchestrator | Thursday 05 March 2026 00:46:31 +0000 (0:00:00.192) 0:00:15.715 ********
2026-03-05 00:46:33.619733 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 00:46:33.619741 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 00:46:33.619748 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619756 | orchestrator |
2026-03-05 00:46:33.619763 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-05 00:46:33.619770 | orchestrator | Thursday 05 March 2026 00:46:31 +0000 (0:00:00.517) 0:00:16.233 ********
2026-03-05 00:46:33.619778 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619785 | orchestrator |
2026-03-05 00:46:33.619792 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-05 00:46:33.619799 | orchestrator | Thursday 05 March 2026 00:46:32 +0000 (0:00:00.167) 0:00:16.400 ********
2026-03-05 00:46:33.619813 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 00:46:33.619820 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 00:46:33.619828 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619835 | orchestrator |
2026-03-05 00:46:33.619843 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-05 00:46:33.619850 | orchestrator | Thursday 05 March 2026 00:46:32 +0000 (0:00:00.170) 0:00:16.571 ********
2026-03-05 00:46:33.619857 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619865 | orchestrator |
2026-03-05 00:46:33.619872 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-05 00:46:33.619879 | orchestrator | Thursday 05 March 2026 00:46:32 +0000 (0:00:00.197) 0:00:16.768 ********
2026-03-05 00:46:33.619887 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 00:46:33.619894 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 00:46:33.619901 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.619926 | orchestrator |
2026-03-05 00:46:33.619933 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-05 00:46:33.619941 | orchestrator | Thursday 05 March 2026 00:46:32 +0000 (0:00:00.250) 0:00:17.019 ********
2026-03-05 00:46:33.619947 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:46:33.619955 | orchestrator |
2026-03-05 00:46:33.619962 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-05 00:46:33.619997 | orchestrator | Thursday 05 March 2026 00:46:32 +0000 (0:00:00.205) 0:00:17.225 ********
2026-03-05 00:46:33.620009 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 00:46:33.620016 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 00:46:33.620024 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.620031 | orchestrator |
2026-03-05 00:46:33.620038 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-05 00:46:33.620046 | orchestrator | Thursday 05 March 2026 00:46:33 +0000 (0:00:00.190) 0:00:17.415 ********
2026-03-05 00:46:33.620053 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 00:46:33.620060 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 00:46:33.620068 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.620075 | orchestrator |
2026-03-05 00:46:33.620082 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-05 00:46:33.620090 | orchestrator | Thursday 05 March 2026 00:46:33 +0000 (0:00:00.183) 0:00:17.599 ********
2026-03-05 00:46:33.620097 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 00:46:33.620105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 00:46:33.620112 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.620120 | orchestrator |
2026-03-05 00:46:33.620127 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-05 00:46:33.620134 | orchestrator | Thursday 05 March 2026 00:46:33 +0000 (0:00:00.187) 0:00:17.786 ********
2026-03-05 00:46:33.620176 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:33.620184 | orchestrator |
2026-03-05 00:46:33.620191 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-05 00:46:33.620205 | orchestrator | Thursday 05 March 2026 00:46:33 +0000 (0:00:00.164) 0:00:17.951 ********
2026-03-05 00:46:40.783816 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.783895 | orchestrator |
2026-03-05 00:46:40.783931 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-05 00:46:40.783939 | orchestrator | Thursday 05 March 2026 00:46:33 +0000 (0:00:00.150) 0:00:18.101 ********
2026-03-05 00:46:40.783944 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.783949 | orchestrator |
2026-03-05 00:46:40.783954 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-05 00:46:40.783959 | orchestrator | Thursday 05 March 2026 00:46:33 +0000 (0:00:00.168) 0:00:18.270 ********
2026-03-05 00:46:40.783964 | orchestrator | ok: [testbed-node-3] => {
2026-03-05 00:46:40.783969 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-05 00:46:40.783974 | orchestrator | }
2026-03-05 00:46:40.783979 | orchestrator |
2026-03-05 00:46:40.783984 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-05 00:46:40.783989 | orchestrator | Thursday 05 March 2026 00:46:34 +0000 (0:00:00.411) 0:00:18.682 ********
2026-03-05 00:46:40.783993 | orchestrator | ok: [testbed-node-3] => {
2026-03-05 00:46:40.783998 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-05 00:46:40.784003 | orchestrator | }
2026-03-05 00:46:40.784007 | orchestrator |
2026-03-05 00:46:40.784012 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-05 00:46:40.784016 | orchestrator | Thursday 05 March 2026 00:46:34 +0000 (0:00:00.159) 0:00:18.841 ********
2026-03-05 00:46:40.784021 | orchestrator | ok: [testbed-node-3] => {
2026-03-05 00:46:40.784027 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-05 00:46:40.784032 | orchestrator | }
2026-03-05 00:46:40.784037 | orchestrator |
2026-03-05 00:46:40.784041 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-05 00:46:40.784046 | orchestrator | Thursday 05 March 2026 00:46:34 +0000 (0:00:00.161) 0:00:19.002 ********
2026-03-05 00:46:40.784051 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:46:40.784056 | orchestrator |
2026-03-05 00:46:40.784060 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-05 00:46:40.784065 | orchestrator | Thursday 05 March 2026 00:46:35 +0000 (0:00:00.776) 0:00:19.779 ********
2026-03-05 00:46:40.784070 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:46:40.784075 | orchestrator |
2026-03-05 00:46:40.784079 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-05 00:46:40.784084 | orchestrator | Thursday 05 March 2026 00:46:36 +0000 (0:00:00.601) 0:00:20.380 ********
2026-03-05 00:46:40.784088 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:46:40.784093 | orchestrator |
2026-03-05 00:46:40.784098 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-05 00:46:40.784102 | orchestrator | Thursday 05 March 2026 00:46:36 +0000 (0:00:00.611) 0:00:20.992 ********
2026-03-05 00:46:40.784107 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:46:40.784112 | orchestrator |
2026-03-05 00:46:40.784116 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-05 00:46:40.784121 | orchestrator | Thursday 05 March 2026 00:46:36 +0000 (0:00:00.177) 0:00:21.170 ********
2026-03-05 00:46:40.784126 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784130 | orchestrator |
2026-03-05 00:46:40.784135 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-05 00:46:40.784140 | orchestrator | Thursday 05 March 2026 00:46:36 +0000 (0:00:00.151) 0:00:21.321 ********
2026-03-05 00:46:40.784144 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784149 | orchestrator |
2026-03-05 00:46:40.784153 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-05 00:46:40.784175 | orchestrator | Thursday 05 March 2026 00:46:37 +0000 (0:00:00.122) 0:00:21.443 ********
2026-03-05 00:46:40.784197 | orchestrator | ok: [testbed-node-3] => {
2026-03-05 00:46:40.784206 | orchestrator |     "vgs_report": {
2026-03-05 00:46:40.784213 | orchestrator |         "vg": []
2026-03-05 00:46:40.784221 | orchestrator |     }
2026-03-05 00:46:40.784228 | orchestrator | }
2026-03-05 00:46:40.784232 | orchestrator |
2026-03-05 00:46:40.784237 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-05 00:46:40.784242 | orchestrator | Thursday 05 March 2026 00:46:37 +0000 (0:00:00.171) 0:00:21.615 ********
2026-03-05 00:46:40.784246 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784251 | orchestrator |
2026-03-05 00:46:40.784256 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-05 00:46:40.784260 | orchestrator | Thursday 05 March 2026 00:46:37 +0000 (0:00:00.165) 0:00:21.792 ********
2026-03-05 00:46:40.784265 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784269 | orchestrator |
2026-03-05 00:46:40.784274 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-05 00:46:40.784278 | orchestrator | Thursday 05 March 2026 00:46:37 +0000 (0:00:00.177) 0:00:21.959 ********
2026-03-05 00:46:40.784283 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784288 | orchestrator |
2026-03-05 00:46:40.784292 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-05 00:46:40.784297 | orchestrator | Thursday 05 March 2026 00:46:37 +0000 (0:00:00.370) 0:00:22.330 ********
2026-03-05 00:46:40.784301 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784306 | orchestrator |
2026-03-05 00:46:40.784310 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-05 00:46:40.784315 | orchestrator | Thursday 05 March 2026 00:46:38 +0000 (0:00:00.161) 0:00:22.491 ********
2026-03-05 00:46:40.784320 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784324 | orchestrator |
2026-03-05 00:46:40.784329 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-05 00:46:40.784334 | orchestrator | Thursday 05 March 2026 00:46:38 +0000 (0:00:00.153) 0:00:22.645 ********
2026-03-05 00:46:40.784338 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784343 | orchestrator |
2026-03-05 00:46:40.784347 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-05 00:46:40.784388 | orchestrator | Thursday 05 March 2026 00:46:38 +0000 (0:00:00.168) 0:00:22.814 ********
2026-03-05 00:46:40.784395 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784400 | orchestrator |
2026-03-05 00:46:40.784406 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-05 00:46:40.784411 | orchestrator | Thursday 05 March 2026 00:46:38 +0000 (0:00:00.155) 0:00:22.969 ********
2026-03-05 00:46:40.784429 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784435 | orchestrator |
2026-03-05 00:46:40.784440 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-05 00:46:40.784445 | orchestrator | Thursday 05 March 2026 00:46:38 +0000 (0:00:00.138) 0:00:23.108 ********
2026-03-05 00:46:40.784451 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784456 | orchestrator |
2026-03-05 00:46:40.784461 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-05 00:46:40.784466 | orchestrator | Thursday 05 March 2026 00:46:38 +0000 (0:00:00.125) 0:00:23.233 ********
2026-03-05 00:46:40.784472 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784477 | orchestrator |
2026-03-05 00:46:40.784482 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-05 00:46:40.784487 | orchestrator | Thursday 05 March 2026 00:46:39 +0000 (0:00:00.137) 0:00:23.371 ********
2026-03-05 00:46:40.784493 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784498 | orchestrator |
2026-03-05 00:46:40.784503 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-05 00:46:40.784508 | orchestrator | Thursday 05 March 2026 00:46:39 +0000 (0:00:00.155) 0:00:23.526 ********
2026-03-05 00:46:40.784520 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784526 | orchestrator |
2026-03-05 00:46:40.784531 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-05 00:46:40.784537 | orchestrator | Thursday 05 March 2026 00:46:39 +0000 (0:00:00.136) 0:00:23.662 ********
2026-03-05 00:46:40.784542 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784547 | orchestrator |
2026-03-05 00:46:40.784553 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-05 00:46:40.784559 | orchestrator | Thursday 05 March 2026 00:46:39 +0000 (0:00:00.135) 0:00:23.798 ********
2026-03-05 00:46:40.784565 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784570 | orchestrator |
2026-03-05 00:46:40.784575 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-05 00:46:40.784581 | orchestrator | Thursday 05 March 2026 00:46:39 +0000 (0:00:00.155) 0:00:23.954 ********
2026-03-05 00:46:40.784588 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 00:46:40.784596 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 00:46:40.784602 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784607 | orchestrator |
2026-03-05 00:46:40.784612 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-05 00:46:40.784618 | orchestrator | Thursday 05 March 2026 00:46:39 +0000 (0:00:00.372) 0:00:24.327 ********
2026-03-05 00:46:40.784624 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 00:46:40.784629 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 00:46:40.784634 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784640 | orchestrator |
2026-03-05 00:46:40.784645 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-05 00:46:40.784651 | orchestrator | Thursday 05 March 2026 00:46:40 +0000 (0:00:00.150) 0:00:24.477 ********
2026-03-05 00:46:40.784657 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 00:46:40.784662 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 00:46:40.784668 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:46:40.784673 | orchestrator |
2026-03-05 00:46:40.784679 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-05 00:46:40.784685 | orchestrator | Thursday 05 March 2026 00:46:40 +0000 (0:00:00.171) 0:00:24.648 ********
2026-03-05 00:46:40.784691 | orchestrator | skipping: [testbed-node-3] => (item={'data':
'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})  2026-03-05 00:46:40.784696 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})  2026-03-05 00:46:40.784701 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:46:40.784707 | orchestrator | 2026-03-05 00:46:40.784712 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-05 00:46:40.784718 | orchestrator | Thursday 05 March 2026 00:46:40 +0000 (0:00:00.154) 0:00:24.802 ******** 2026-03-05 00:46:40.784723 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})  2026-03-05 00:46:40.784729 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})  2026-03-05 00:46:40.784738 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:46:40.784743 | orchestrator | 2026-03-05 00:46:40.784747 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-05 00:46:40.784758 | orchestrator | Thursday 05 March 2026 00:46:40 +0000 (0:00:00.162) 0:00:24.965 ******** 2026-03-05 00:46:40.784766 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})  2026-03-05 00:46:46.548280 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})  2026-03-05 00:46:46.548367 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:46:46.548377 | orchestrator | 2026-03-05 00:46:46.548384 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-05 00:46:46.548393 | orchestrator | Thursday 05 March 2026 00:46:40 +0000 (0:00:00.153) 0:00:25.118 ******** 2026-03-05 00:46:46.548400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})  2026-03-05 00:46:46.548407 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})  2026-03-05 00:46:46.548413 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:46:46.548420 | orchestrator | 2026-03-05 00:46:46.548427 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-05 00:46:46.548433 | orchestrator | Thursday 05 March 2026 00:46:40 +0000 (0:00:00.168) 0:00:25.287 ******** 2026-03-05 00:46:46.548439 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})  2026-03-05 00:46:46.548446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})  2026-03-05 00:46:46.548452 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:46:46.548459 | orchestrator | 2026-03-05 00:46:46.548465 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-05 00:46:46.548472 | orchestrator | Thursday 05 March 2026 00:46:41 +0000 (0:00:00.156) 0:00:25.443 ******** 2026-03-05 00:46:46.548478 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:46:46.548485 | orchestrator | 2026-03-05 00:46:46.548492 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-05 00:46:46.548498 | orchestrator | Thursday 05 March 2026 00:46:41 +0000 
(0:00:00.586) 0:00:26.030 ******** 2026-03-05 00:46:46.548504 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:46:46.548511 | orchestrator | 2026-03-05 00:46:46.548517 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-05 00:46:46.548523 | orchestrator | Thursday 05 March 2026 00:46:42 +0000 (0:00:00.609) 0:00:26.639 ******** 2026-03-05 00:46:46.548529 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:46:46.548536 | orchestrator | 2026-03-05 00:46:46.548542 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-05 00:46:46.548548 | orchestrator | Thursday 05 March 2026 00:46:42 +0000 (0:00:00.160) 0:00:26.799 ******** 2026-03-05 00:46:46.548555 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'vg_name': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'}) 2026-03-05 00:46:46.548576 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'vg_name': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'}) 2026-03-05 00:46:46.548583 | orchestrator | 2026-03-05 00:46:46.548589 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-05 00:46:46.548595 | orchestrator | Thursday 05 March 2026 00:46:42 +0000 (0:00:00.163) 0:00:26.963 ******** 2026-03-05 00:46:46.548602 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})  2026-03-05 00:46:46.548665 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})  2026-03-05 00:46:46.548673 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:46:46.548679 | orchestrator | 2026-03-05 00:46:46.548685 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-05 00:46:46.548692 | orchestrator | Thursday 05 March 2026 00:46:43 +0000 (0:00:00.401) 0:00:27.364 ******** 2026-03-05 00:46:46.548698 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})  2026-03-05 00:46:46.548704 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})  2026-03-05 00:46:46.548711 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:46:46.548717 | orchestrator | 2026-03-05 00:46:46.548724 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-05 00:46:46.548730 | orchestrator | Thursday 05 March 2026 00:46:43 +0000 (0:00:00.168) 0:00:27.533 ******** 2026-03-05 00:46:46.548736 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})  2026-03-05 00:46:46.548743 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})  2026-03-05 00:46:46.548749 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:46:46.548755 | orchestrator | 2026-03-05 00:46:46.548762 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-05 00:46:46.548768 | orchestrator | Thursday 05 March 2026 00:46:43 +0000 (0:00:00.167) 0:00:27.700 ******** 2026-03-05 00:46:46.548786 | orchestrator | ok: [testbed-node-3] => { 2026-03-05 00:46:46.548792 | orchestrator |  "lvm_report": { 2026-03-05 00:46:46.548799 | orchestrator |  "lv": [ 2026-03-05 00:46:46.548805 | orchestrator |  { 2026-03-05 00:46:46.548811 | orchestrator |  "lv_name": 
"osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2", 2026-03-05 00:46:46.548819 | orchestrator |  "vg_name": "ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2" 2026-03-05 00:46:46.548825 | orchestrator |  }, 2026-03-05 00:46:46.548831 | orchestrator |  { 2026-03-05 00:46:46.548839 | orchestrator |  "lv_name": "osd-block-8e61642d-a609-5f4c-883e-a16b698ed397", 2026-03-05 00:46:46.548847 | orchestrator |  "vg_name": "ceph-8e61642d-a609-5f4c-883e-a16b698ed397" 2026-03-05 00:46:46.548854 | orchestrator |  } 2026-03-05 00:46:46.548861 | orchestrator |  ], 2026-03-05 00:46:46.548869 | orchestrator |  "pv": [ 2026-03-05 00:46:46.548875 | orchestrator |  { 2026-03-05 00:46:46.548883 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-05 00:46:46.548890 | orchestrator |  "vg_name": "ceph-8e61642d-a609-5f4c-883e-a16b698ed397" 2026-03-05 00:46:46.548948 | orchestrator |  }, 2026-03-05 00:46:46.548974 | orchestrator |  { 2026-03-05 00:46:46.548985 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-05 00:46:46.549010 | orchestrator |  "vg_name": "ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2" 2026-03-05 00:46:46.549021 | orchestrator |  } 2026-03-05 00:46:46.549030 | orchestrator |  ] 2026-03-05 00:46:46.549041 | orchestrator |  } 2026-03-05 00:46:46.549051 | orchestrator | } 2026-03-05 00:46:46.549060 | orchestrator | 2026-03-05 00:46:46.549070 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-05 00:46:46.549080 | orchestrator | 2026-03-05 00:46:46.549091 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-05 00:46:46.549100 | orchestrator | Thursday 05 March 2026 00:46:43 +0000 (0:00:00.315) 0:00:28.016 ******** 2026-03-05 00:46:46.549120 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-05 00:46:46.549130 | orchestrator | 2026-03-05 00:46:46.549139 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-05 
00:46:46.549150 | orchestrator | Thursday 05 March 2026 00:46:43 +0000 (0:00:00.277) 0:00:28.294 ******** 2026-03-05 00:46:46.549158 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:46:46.549168 | orchestrator | 2026-03-05 00:46:46.549177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:46:46.549187 | orchestrator | Thursday 05 March 2026 00:46:44 +0000 (0:00:00.255) 0:00:28.549 ******** 2026-03-05 00:46:46.549197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-05 00:46:46.549207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-05 00:46:46.549217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-05 00:46:46.549227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-05 00:46:46.549237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-05 00:46:46.549247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-05 00:46:46.549258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-05 00:46:46.549276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-05 00:46:46.549287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-05 00:46:46.549298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-05 00:46:46.549308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-05 00:46:46.549323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-05 00:46:46.549333 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-05 00:46:46.549343 | orchestrator | 2026-03-05 00:46:46.549353 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:46:46.549363 | orchestrator | Thursday 05 March 2026 00:46:44 +0000 (0:00:00.472) 0:00:29.021 ******** 2026-03-05 00:46:46.549372 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:46.549381 | orchestrator | 2026-03-05 00:46:46.549390 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:46:46.549399 | orchestrator | Thursday 05 March 2026 00:46:44 +0000 (0:00:00.217) 0:00:29.238 ******** 2026-03-05 00:46:46.549409 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:46.549418 | orchestrator | 2026-03-05 00:46:46.549428 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:46:46.549438 | orchestrator | Thursday 05 March 2026 00:46:45 +0000 (0:00:00.221) 0:00:29.460 ******** 2026-03-05 00:46:46.549448 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:46.549458 | orchestrator | 2026-03-05 00:46:46.549468 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:46:46.549479 | orchestrator | Thursday 05 March 2026 00:46:45 +0000 (0:00:00.715) 0:00:30.176 ******** 2026-03-05 00:46:46.549489 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:46.549498 | orchestrator | 2026-03-05 00:46:46.549508 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:46:46.549517 | orchestrator | Thursday 05 March 2026 00:46:46 +0000 (0:00:00.267) 0:00:30.443 ******** 2026-03-05 00:46:46.549526 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:46.549535 | orchestrator | 2026-03-05 00:46:46.549544 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-05 00:46:46.549554 | orchestrator | Thursday 05 March 2026 00:46:46 +0000 (0:00:00.208) 0:00:30.652 ******** 2026-03-05 00:46:46.549574 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:46.549584 | orchestrator | 2026-03-05 00:46:46.549607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:46:58.583145 | orchestrator | Thursday 05 March 2026 00:46:46 +0000 (0:00:00.228) 0:00:30.881 ******** 2026-03-05 00:46:58.583247 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.583260 | orchestrator | 2026-03-05 00:46:58.583268 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:46:58.583275 | orchestrator | Thursday 05 March 2026 00:46:46 +0000 (0:00:00.211) 0:00:31.093 ******** 2026-03-05 00:46:58.583282 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.583289 | orchestrator | 2026-03-05 00:46:58.583295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:46:58.583302 | orchestrator | Thursday 05 March 2026 00:46:46 +0000 (0:00:00.216) 0:00:31.309 ******** 2026-03-05 00:46:58.583308 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6) 2026-03-05 00:46:58.583317 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6) 2026-03-05 00:46:58.583323 | orchestrator | 2026-03-05 00:46:58.583329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:46:58.583336 | orchestrator | Thursday 05 March 2026 00:46:47 +0000 (0:00:00.446) 0:00:31.756 ******** 2026-03-05 00:46:58.583342 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9c8197fe-cfc6-470d-b43f-168fdfa4c980) 2026-03-05 00:46:58.583349 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9c8197fe-cfc6-470d-b43f-168fdfa4c980) 2026-03-05 00:46:58.583356 | orchestrator | 2026-03-05 00:46:58.583362 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:46:58.583369 | orchestrator | Thursday 05 March 2026 00:46:47 +0000 (0:00:00.461) 0:00:32.218 ******** 2026-03-05 00:46:58.583375 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bc7e009b-77b4-429d-819f-0751386ded0b) 2026-03-05 00:46:58.583381 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bc7e009b-77b4-429d-819f-0751386ded0b) 2026-03-05 00:46:58.583387 | orchestrator | 2026-03-05 00:46:58.583393 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:46:58.583405 | orchestrator | Thursday 05 March 2026 00:46:48 +0000 (0:00:00.462) 0:00:32.680 ******** 2026-03-05 00:46:58.583411 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c272dc3f-f5b6-4857-91f2-561a599f15b5) 2026-03-05 00:46:58.583417 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c272dc3f-f5b6-4857-91f2-561a599f15b5) 2026-03-05 00:46:58.583423 | orchestrator | 2026-03-05 00:46:58.583430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:46:58.583436 | orchestrator | Thursday 05 March 2026 00:46:49 +0000 (0:00:00.682) 0:00:33.363 ******** 2026-03-05 00:46:58.583442 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-05 00:46:58.583449 | orchestrator | 2026-03-05 00:46:58.583455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.583462 | orchestrator | Thursday 05 March 2026 00:46:49 +0000 (0:00:00.639) 0:00:34.002 ******** 2026-03-05 00:46:58.583469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-05 00:46:58.583477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-05 00:46:58.583483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-05 00:46:58.583489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-05 00:46:58.583496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-05 00:46:58.583521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-05 00:46:58.583549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-05 00:46:58.583556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-05 00:46:58.583562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-05 00:46:58.583569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-05 00:46:58.583575 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-05 00:46:58.583581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-05 00:46:58.583587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-05 00:46:58.583593 | orchestrator | 2026-03-05 00:46:58.583599 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.583605 | orchestrator | Thursday 05 March 2026 00:46:50 +0000 (0:00:01.071) 0:00:35.074 ******** 2026-03-05 00:46:58.583610 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.583616 | orchestrator | 2026-03-05 
00:46:58.583621 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.583627 | orchestrator | Thursday 05 March 2026 00:46:50 +0000 (0:00:00.217) 0:00:35.291 ******** 2026-03-05 00:46:58.583633 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.583638 | orchestrator | 2026-03-05 00:46:58.583644 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.583650 | orchestrator | Thursday 05 March 2026 00:46:51 +0000 (0:00:00.251) 0:00:35.543 ******** 2026-03-05 00:46:58.583657 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.583663 | orchestrator | 2026-03-05 00:46:58.583688 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.583695 | orchestrator | Thursday 05 March 2026 00:46:51 +0000 (0:00:00.201) 0:00:35.744 ******** 2026-03-05 00:46:58.583701 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.583707 | orchestrator | 2026-03-05 00:46:58.583714 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.583720 | orchestrator | Thursday 05 March 2026 00:46:51 +0000 (0:00:00.215) 0:00:35.960 ******** 2026-03-05 00:46:58.583726 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.583733 | orchestrator | 2026-03-05 00:46:58.583739 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.583745 | orchestrator | Thursday 05 March 2026 00:46:51 +0000 (0:00:00.192) 0:00:36.153 ******** 2026-03-05 00:46:58.583751 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.583757 | orchestrator | 2026-03-05 00:46:58.583763 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.583769 | orchestrator | Thursday 05 March 2026 00:46:52 +0000 (0:00:00.212) 
0:00:36.365 ******** 2026-03-05 00:46:58.583775 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.583780 | orchestrator | 2026-03-05 00:46:58.583788 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.583795 | orchestrator | Thursday 05 March 2026 00:46:52 +0000 (0:00:00.210) 0:00:36.575 ******** 2026-03-05 00:46:58.583802 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.583808 | orchestrator | 2026-03-05 00:46:58.583814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.583820 | orchestrator | Thursday 05 March 2026 00:46:52 +0000 (0:00:00.254) 0:00:36.830 ******** 2026-03-05 00:46:58.583826 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-05 00:46:58.583833 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-05 00:46:58.583839 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-05 00:46:58.583846 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-05 00:46:58.583852 | orchestrator | 2026-03-05 00:46:58.583858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.583874 | orchestrator | Thursday 05 March 2026 00:46:53 +0000 (0:00:00.889) 0:00:37.719 ******** 2026-03-05 00:46:58.583881 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.583997 | orchestrator | 2026-03-05 00:46:58.584026 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.584031 | orchestrator | Thursday 05 March 2026 00:46:53 +0000 (0:00:00.216) 0:00:37.936 ******** 2026-03-05 00:46:58.584036 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.584040 | orchestrator | 2026-03-05 00:46:58.584045 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.584050 | orchestrator | Thursday 05 
March 2026 00:46:54 +0000 (0:00:00.706) 0:00:38.642 ******** 2026-03-05 00:46:58.584054 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.584059 | orchestrator | 2026-03-05 00:46:58.584063 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-05 00:46:58.584068 | orchestrator | Thursday 05 March 2026 00:46:54 +0000 (0:00:00.197) 0:00:38.839 ******** 2026-03-05 00:46:58.584072 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.584076 | orchestrator | 2026-03-05 00:46:58.584081 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-05 00:46:58.584093 | orchestrator | Thursday 05 March 2026 00:46:54 +0000 (0:00:00.213) 0:00:39.053 ******** 2026-03-05 00:46:58.584098 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.584103 | orchestrator | 2026-03-05 00:46:58.584107 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-05 00:46:58.584111 | orchestrator | Thursday 05 March 2026 00:46:54 +0000 (0:00:00.153) 0:00:39.207 ******** 2026-03-05 00:46:58.584116 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '487cf15b-a3c4-55bb-8565-d1e78d85d824'}}) 2026-03-05 00:46:58.584121 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04f48836-d47d-5181-a61a-7e2c62572595'}}) 2026-03-05 00:46:58.584125 | orchestrator | 2026-03-05 00:46:58.584130 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-05 00:46:58.584135 | orchestrator | Thursday 05 March 2026 00:46:55 +0000 (0:00:00.215) 0:00:39.422 ******** 2026-03-05 00:46:58.584141 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'}) 2026-03-05 00:46:58.584147 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'}) 2026-03-05 00:46:58.584151 | orchestrator | 2026-03-05 00:46:58.584156 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-05 00:46:58.584160 | orchestrator | Thursday 05 March 2026 00:46:57 +0000 (0:00:01.914) 0:00:41.337 ******** 2026-03-05 00:46:58.584165 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:46:58.584171 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:46:58.584175 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:46:58.584180 | orchestrator | 2026-03-05 00:46:58.584184 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-05 00:46:58.584189 | orchestrator | Thursday 05 March 2026 00:46:57 +0000 (0:00:00.157) 0:00:41.494 ******** 2026-03-05 00:46:58.584193 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'}) 2026-03-05 00:46:58.584208 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'}) 2026-03-05 00:47:04.414601 | orchestrator | 2026-03-05 00:47:04.414700 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-05 00:47:04.414754 | orchestrator | Thursday 05 March 2026 00:46:58 +0000 (0:00:01.420) 0:00:42.914 ******** 2026-03-05 00:47:04.414761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 
'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:04.414767 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:04.414772 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.414777 | orchestrator | 2026-03-05 00:47:04.414781 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-05 00:47:04.414785 | orchestrator | Thursday 05 March 2026 00:46:58 +0000 (0:00:00.198) 0:00:43.112 ******** 2026-03-05 00:47:04.414790 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.414794 | orchestrator | 2026-03-05 00:47:04.414798 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-05 00:47:04.414802 | orchestrator | Thursday 05 March 2026 00:46:58 +0000 (0:00:00.134) 0:00:43.247 ******** 2026-03-05 00:47:04.414806 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:04.414810 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:04.414814 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.414818 | orchestrator | 2026-03-05 00:47:04.414822 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-05 00:47:04.414825 | orchestrator | Thursday 05 March 2026 00:46:59 +0000 (0:00:00.181) 0:00:43.428 ******** 2026-03-05 00:47:04.414829 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.414833 | orchestrator | 2026-03-05 00:47:04.414837 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-05 00:47:04.414841 | orchestrator | 
Thursday 05 March 2026 00:46:59 +0000 (0:00:00.164) 0:00:43.592 ******** 2026-03-05 00:47:04.414845 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:04.414852 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:04.414858 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.414865 | orchestrator | 2026-03-05 00:47:04.414871 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-05 00:47:04.414955 | orchestrator | Thursday 05 March 2026 00:46:59 +0000 (0:00:00.389) 0:00:43.982 ******** 2026-03-05 00:47:04.414966 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.414975 | orchestrator | 2026-03-05 00:47:04.414982 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-05 00:47:04.414988 | orchestrator | Thursday 05 March 2026 00:46:59 +0000 (0:00:00.140) 0:00:44.122 ******** 2026-03-05 00:47:04.414994 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:04.414999 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:04.415005 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.415011 | orchestrator | 2026-03-05 00:47:04.415016 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-05 00:47:04.415022 | orchestrator | Thursday 05 March 2026 00:46:59 +0000 (0:00:00.157) 0:00:44.280 ******** 2026-03-05 00:47:04.415028 | orchestrator | ok: [testbed-node-4] 
2026-03-05 00:47:04.415036 | orchestrator | 2026-03-05 00:47:04.415041 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-05 00:47:04.415056 | orchestrator | Thursday 05 March 2026 00:47:00 +0000 (0:00:00.152) 0:00:44.433 ******** 2026-03-05 00:47:04.415063 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:04.415069 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:04.415075 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.415081 | orchestrator | 2026-03-05 00:47:04.415087 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-05 00:47:04.415093 | orchestrator | Thursday 05 March 2026 00:47:00 +0000 (0:00:00.140) 0:00:44.573 ******** 2026-03-05 00:47:04.415098 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:04.415104 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:04.415109 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.415115 | orchestrator | 2026-03-05 00:47:04.415120 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-05 00:47:04.415143 | orchestrator | Thursday 05 March 2026 00:47:00 +0000 (0:00:00.169) 0:00:44.743 ******** 2026-03-05 00:47:04.415150 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 
00:47:04.415156 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:04.415163 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.415169 | orchestrator | 2026-03-05 00:47:04.415176 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-05 00:47:04.415181 | orchestrator | Thursday 05 March 2026 00:47:00 +0000 (0:00:00.156) 0:00:44.899 ******** 2026-03-05 00:47:04.415188 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.415194 | orchestrator | 2026-03-05 00:47:04.415202 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-05 00:47:04.415217 | orchestrator | Thursday 05 March 2026 00:47:00 +0000 (0:00:00.135) 0:00:45.035 ******** 2026-03-05 00:47:04.415227 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.415239 | orchestrator | 2026-03-05 00:47:04.415251 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-05 00:47:04.415263 | orchestrator | Thursday 05 March 2026 00:47:00 +0000 (0:00:00.138) 0:00:45.174 ******** 2026-03-05 00:47:04.415275 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.415287 | orchestrator | 2026-03-05 00:47:04.415299 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-05 00:47:04.415311 | orchestrator | Thursday 05 March 2026 00:47:00 +0000 (0:00:00.156) 0:00:45.330 ******** 2026-03-05 00:47:04.415322 | orchestrator | ok: [testbed-node-4] => { 2026-03-05 00:47:04.415334 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-05 00:47:04.415347 | orchestrator | } 2026-03-05 00:47:04.415359 | orchestrator | 2026-03-05 00:47:04.415371 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-05 
00:47:04.415381 | orchestrator | Thursday 05 March 2026 00:47:01 +0000 (0:00:00.146) 0:00:45.477 ******** 2026-03-05 00:47:04.415387 | orchestrator | ok: [testbed-node-4] => { 2026-03-05 00:47:04.415393 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-05 00:47:04.415399 | orchestrator | } 2026-03-05 00:47:04.415409 | orchestrator | 2026-03-05 00:47:04.415421 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-05 00:47:04.415430 | orchestrator | Thursday 05 March 2026 00:47:01 +0000 (0:00:00.157) 0:00:45.634 ******** 2026-03-05 00:47:04.415443 | orchestrator | ok: [testbed-node-4] => { 2026-03-05 00:47:04.415450 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-05 00:47:04.415456 | orchestrator | } 2026-03-05 00:47:04.415463 | orchestrator | 2026-03-05 00:47:04.415469 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-05 00:47:04.415476 | orchestrator | Thursday 05 March 2026 00:47:01 +0000 (0:00:00.370) 0:00:46.004 ******** 2026-03-05 00:47:04.415482 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:47:04.415488 | orchestrator | 2026-03-05 00:47:04.415495 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-05 00:47:04.415502 | orchestrator | Thursday 05 March 2026 00:47:02 +0000 (0:00:00.552) 0:00:46.557 ******** 2026-03-05 00:47:04.415509 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:47:04.415515 | orchestrator | 2026-03-05 00:47:04.415522 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-05 00:47:04.415529 | orchestrator | Thursday 05 March 2026 00:47:02 +0000 (0:00:00.528) 0:00:47.086 ******** 2026-03-05 00:47:04.415536 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:47:04.415542 | orchestrator | 2026-03-05 00:47:04.415548 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-05 00:47:04.415555 | orchestrator | Thursday 05 March 2026 00:47:03 +0000 (0:00:00.547) 0:00:47.634 ******** 2026-03-05 00:47:04.415561 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:47:04.415567 | orchestrator | 2026-03-05 00:47:04.415574 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-05 00:47:04.415581 | orchestrator | Thursday 05 March 2026 00:47:03 +0000 (0:00:00.170) 0:00:47.805 ******** 2026-03-05 00:47:04.415588 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.415595 | orchestrator | 2026-03-05 00:47:04.415609 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-05 00:47:04.415616 | orchestrator | Thursday 05 March 2026 00:47:03 +0000 (0:00:00.121) 0:00:47.926 ******** 2026-03-05 00:47:04.415622 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.415628 | orchestrator | 2026-03-05 00:47:04.415634 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-05 00:47:04.415640 | orchestrator | Thursday 05 March 2026 00:47:03 +0000 (0:00:00.128) 0:00:48.054 ******** 2026-03-05 00:47:04.415647 | orchestrator | ok: [testbed-node-4] => { 2026-03-05 00:47:04.415654 | orchestrator |  "vgs_report": { 2026-03-05 00:47:04.415660 | orchestrator |  "vg": [] 2026-03-05 00:47:04.415667 | orchestrator |  } 2026-03-05 00:47:04.415673 | orchestrator | } 2026-03-05 00:47:04.415680 | orchestrator | 2026-03-05 00:47:04.415687 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-05 00:47:04.415693 | orchestrator | Thursday 05 March 2026 00:47:03 +0000 (0:00:00.140) 0:00:48.195 ******** 2026-03-05 00:47:04.415700 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.415707 | orchestrator | 2026-03-05 00:47:04.415714 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-05 00:47:04.415721 | orchestrator | Thursday 05 March 2026 00:47:03 +0000 (0:00:00.133) 0:00:48.329 ******** 2026-03-05 00:47:04.415727 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.415734 | orchestrator | 2026-03-05 00:47:04.415740 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-05 00:47:04.415747 | orchestrator | Thursday 05 March 2026 00:47:04 +0000 (0:00:00.146) 0:00:48.475 ******** 2026-03-05 00:47:04.415754 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.415761 | orchestrator | 2026-03-05 00:47:04.415767 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-05 00:47:04.415774 | orchestrator | Thursday 05 March 2026 00:47:04 +0000 (0:00:00.138) 0:00:48.614 ******** 2026-03-05 00:47:04.415781 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:04.415788 | orchestrator | 2026-03-05 00:47:04.415801 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-05 00:47:09.565481 | orchestrator | Thursday 05 March 2026 00:47:04 +0000 (0:00:00.132) 0:00:48.747 ******** 2026-03-05 00:47:09.565613 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.565630 | orchestrator | 2026-03-05 00:47:09.565643 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-05 00:47:09.565655 | orchestrator | Thursday 05 March 2026 00:47:04 +0000 (0:00:00.382) 0:00:49.130 ******** 2026-03-05 00:47:09.565666 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.565677 | orchestrator | 2026-03-05 00:47:09.565688 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-05 00:47:09.565699 | orchestrator | Thursday 05 March 2026 00:47:04 +0000 (0:00:00.153) 0:00:49.283 ******** 2026-03-05 00:47:09.565710 | orchestrator | skipping: [testbed-node-4] 
2026-03-05 00:47:09.565721 | orchestrator | 2026-03-05 00:47:09.565732 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-05 00:47:09.565743 | orchestrator | Thursday 05 March 2026 00:47:05 +0000 (0:00:00.137) 0:00:49.421 ******** 2026-03-05 00:47:09.565753 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.565764 | orchestrator | 2026-03-05 00:47:09.565793 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-05 00:47:09.565805 | orchestrator | Thursday 05 March 2026 00:47:05 +0000 (0:00:00.148) 0:00:49.570 ******** 2026-03-05 00:47:09.565816 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.565827 | orchestrator | 2026-03-05 00:47:09.565849 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-05 00:47:09.565860 | orchestrator | Thursday 05 March 2026 00:47:05 +0000 (0:00:00.148) 0:00:49.719 ******** 2026-03-05 00:47:09.565871 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.565935 | orchestrator | 2026-03-05 00:47:09.565948 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-05 00:47:09.565959 | orchestrator | Thursday 05 March 2026 00:47:05 +0000 (0:00:00.150) 0:00:49.870 ******** 2026-03-05 00:47:09.565970 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.565981 | orchestrator | 2026-03-05 00:47:09.565992 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-05 00:47:09.566004 | orchestrator | Thursday 05 March 2026 00:47:05 +0000 (0:00:00.135) 0:00:50.005 ******** 2026-03-05 00:47:09.566072 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.566086 | orchestrator | 2026-03-05 00:47:09.566101 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-05 00:47:09.566113 | orchestrator | 
Thursday 05 March 2026 00:47:05 +0000 (0:00:00.148) 0:00:50.154 ******** 2026-03-05 00:47:09.566124 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.566135 | orchestrator | 2026-03-05 00:47:09.566146 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-05 00:47:09.566157 | orchestrator | Thursday 05 March 2026 00:47:05 +0000 (0:00:00.152) 0:00:50.306 ******** 2026-03-05 00:47:09.566168 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.566179 | orchestrator | 2026-03-05 00:47:09.566190 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-05 00:47:09.566215 | orchestrator | Thursday 05 March 2026 00:47:06 +0000 (0:00:00.163) 0:00:50.470 ******** 2026-03-05 00:47:09.566228 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:09.566240 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:09.566252 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.566263 | orchestrator | 2026-03-05 00:47:09.566274 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-05 00:47:09.566285 | orchestrator | Thursday 05 March 2026 00:47:06 +0000 (0:00:00.165) 0:00:50.635 ******** 2026-03-05 00:47:09.566296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:09.566322 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:09.566341 | orchestrator | skipping: 
[testbed-node-4] 2026-03-05 00:47:09.566358 | orchestrator | 2026-03-05 00:47:09.566373 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-05 00:47:09.566401 | orchestrator | Thursday 05 March 2026 00:47:06 +0000 (0:00:00.154) 0:00:50.790 ******** 2026-03-05 00:47:09.566422 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:09.566441 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:09.566459 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.566476 | orchestrator | 2026-03-05 00:47:09.566494 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-05 00:47:09.566512 | orchestrator | Thursday 05 March 2026 00:47:06 +0000 (0:00:00.395) 0:00:51.186 ******** 2026-03-05 00:47:09.566531 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:09.566550 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:09.566570 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.566588 | orchestrator | 2026-03-05 00:47:09.566631 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-05 00:47:09.566652 | orchestrator | Thursday 05 March 2026 00:47:07 +0000 (0:00:00.165) 0:00:51.351 ******** 2026-03-05 00:47:09.566671 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 
'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:09.566690 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:09.566710 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.566729 | orchestrator | 2026-03-05 00:47:09.566749 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-05 00:47:09.566769 | orchestrator | Thursday 05 March 2026 00:47:07 +0000 (0:00:00.169) 0:00:51.520 ******** 2026-03-05 00:47:09.566789 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:09.566808 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:09.566828 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.566848 | orchestrator | 2026-03-05 00:47:09.566861 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-05 00:47:09.566872 | orchestrator | Thursday 05 March 2026 00:47:07 +0000 (0:00:00.175) 0:00:51.696 ******** 2026-03-05 00:47:09.566912 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:09.566924 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:09.566935 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.566946 | orchestrator | 2026-03-05 00:47:09.566957 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-05 
00:47:09.566968 | orchestrator | Thursday 05 March 2026 00:47:07 +0000 (0:00:00.165) 0:00:51.862 ******** 2026-03-05 00:47:09.566978 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:09.567001 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:09.567073 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.567086 | orchestrator | 2026-03-05 00:47:09.567097 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-05 00:47:09.567108 | orchestrator | Thursday 05 March 2026 00:47:07 +0000 (0:00:00.167) 0:00:52.029 ******** 2026-03-05 00:47:09.567119 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:47:09.567130 | orchestrator | 2026-03-05 00:47:09.567141 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-05 00:47:09.567152 | orchestrator | Thursday 05 March 2026 00:47:08 +0000 (0:00:00.561) 0:00:52.591 ******** 2026-03-05 00:47:09.567163 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:47:09.567174 | orchestrator | 2026-03-05 00:47:09.567185 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-05 00:47:09.567195 | orchestrator | Thursday 05 March 2026 00:47:08 +0000 (0:00:00.578) 0:00:53.169 ******** 2026-03-05 00:47:09.567206 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:47:09.567217 | orchestrator | 2026-03-05 00:47:09.567228 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-05 00:47:09.567239 | orchestrator | Thursday 05 March 2026 00:47:08 +0000 (0:00:00.157) 0:00:53.326 ******** 2026-03-05 00:47:09.567250 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'vg_name': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'}) 2026-03-05 00:47:09.567263 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'vg_name': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'}) 2026-03-05 00:47:09.567274 | orchestrator | 2026-03-05 00:47:09.567285 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-05 00:47:09.567295 | orchestrator | Thursday 05 March 2026 00:47:09 +0000 (0:00:00.205) 0:00:53.531 ******** 2026-03-05 00:47:09.567306 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:09.567318 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:09.567329 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:09.567340 | orchestrator | 2026-03-05 00:47:09.567351 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-05 00:47:09.567361 | orchestrator | Thursday 05 March 2026 00:47:09 +0000 (0:00:00.183) 0:00:53.715 ******** 2026-03-05 00:47:09.567372 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:09.567395 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:16.149237 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:16.149327 | orchestrator | 2026-03-05 00:47:16.149347 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-05 00:47:16.149356 | 
orchestrator | Thursday 05 March 2026 00:47:09 +0000 (0:00:00.183) 0:00:53.899 ******** 2026-03-05 00:47:16.149363 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})  2026-03-05 00:47:16.149371 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})  2026-03-05 00:47:16.149378 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:47:16.149384 | orchestrator | 2026-03-05 00:47:16.149390 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-05 00:47:16.149417 | orchestrator | Thursday 05 March 2026 00:47:09 +0000 (0:00:00.159) 0:00:54.058 ******** 2026-03-05 00:47:16.149423 | orchestrator | ok: [testbed-node-4] => { 2026-03-05 00:47:16.149430 | orchestrator |  "lvm_report": { 2026-03-05 00:47:16.149437 | orchestrator |  "lv": [ 2026-03-05 00:47:16.149444 | orchestrator |  { 2026-03-05 00:47:16.149457 | orchestrator |  "lv_name": "osd-block-04f48836-d47d-5181-a61a-7e2c62572595", 2026-03-05 00:47:16.149464 | orchestrator |  "vg_name": "ceph-04f48836-d47d-5181-a61a-7e2c62572595" 2026-03-05 00:47:16.149470 | orchestrator |  }, 2026-03-05 00:47:16.149476 | orchestrator |  { 2026-03-05 00:47:16.149482 | orchestrator |  "lv_name": "osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824", 2026-03-05 00:47:16.149489 | orchestrator |  "vg_name": "ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824" 2026-03-05 00:47:16.149495 | orchestrator |  } 2026-03-05 00:47:16.149501 | orchestrator |  ], 2026-03-05 00:47:16.149507 | orchestrator |  "pv": [ 2026-03-05 00:47:16.149514 | orchestrator |  { 2026-03-05 00:47:16.149520 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-05 00:47:16.149527 | orchestrator |  "vg_name": "ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824" 2026-03-05 00:47:16.149533 | orchestrator |  }, 2026-03-05 
00:47:16.149539 | orchestrator |  { 2026-03-05 00:47:16.149546 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-05 00:47:16.149552 | orchestrator |  "vg_name": "ceph-04f48836-d47d-5181-a61a-7e2c62572595" 2026-03-05 00:47:16.149558 | orchestrator |  } 2026-03-05 00:47:16.149563 | orchestrator |  ] 2026-03-05 00:47:16.149570 | orchestrator |  } 2026-03-05 00:47:16.149576 | orchestrator | } 2026-03-05 00:47:16.149582 | orchestrator | 2026-03-05 00:47:16.149589 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-05 00:47:16.149595 | orchestrator | 2026-03-05 00:47:16.149601 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-05 00:47:16.149607 | orchestrator | Thursday 05 March 2026 00:47:10 +0000 (0:00:00.522) 0:00:54.581 ******** 2026-03-05 00:47:16.149614 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-05 00:47:16.149619 | orchestrator | 2026-03-05 00:47:16.149625 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-05 00:47:16.149631 | orchestrator | Thursday 05 March 2026 00:47:10 +0000 (0:00:00.273) 0:00:54.855 ******** 2026-03-05 00:47:16.149637 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:47:16.149642 | orchestrator | 2026-03-05 00:47:16.149648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:47:16.149653 | orchestrator | Thursday 05 March 2026 00:47:10 +0000 (0:00:00.249) 0:00:55.105 ******** 2026-03-05 00:47:16.149659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-05 00:47:16.149665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-05 00:47:16.149670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-05 00:47:16.149676 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-05 00:47:16.149682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-05 00:47:16.149688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-05 00:47:16.149693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-05 00:47:16.149699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-05 00:47:16.149705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-05 00:47:16.149710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-05 00:47:16.149723 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-05 00:47:16.149728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-05 00:47:16.149734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-05 00:47:16.149739 | orchestrator | 2026-03-05 00:47:16.149745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:47:16.149754 | orchestrator | Thursday 05 March 2026 00:47:11 +0000 (0:00:00.434) 0:00:55.540 ******** 2026-03-05 00:47:16.149760 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:47:16.149765 | orchestrator | 2026-03-05 00:47:16.149771 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-05 00:47:16.149777 | orchestrator | Thursday 05 March 2026 00:47:11 +0000 (0:00:00.226) 0:00:55.766 ******** 2026-03-05 00:47:16.149783 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:47:16.149788 | orchestrator | 2026-03-05 
00:47:16.149795 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:47:16.149818 | orchestrator | Thursday 05 March 2026 00:47:11 +0000 (0:00:00.196) 0:00:55.963 ********
2026-03-05 00:47:16.149825 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:16.149831 | orchestrator |
2026-03-05 00:47:16.149837 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:47:16.149843 | orchestrator | Thursday 05 March 2026 00:47:11 +0000 (0:00:00.202) 0:00:56.166 ********
2026-03-05 00:47:16.149849 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:16.149854 | orchestrator |
2026-03-05 00:47:16.149860 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:47:16.149954 | orchestrator | Thursday 05 March 2026 00:47:12 +0000 (0:00:00.229) 0:00:56.396 ********
2026-03-05 00:47:16.149964 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:16.149971 | orchestrator |
2026-03-05 00:47:16.149977 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:47:16.149984 | orchestrator | Thursday 05 March 2026 00:47:12 +0000 (0:00:00.683) 0:00:57.079 ********
2026-03-05 00:47:16.149991 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:16.149996 | orchestrator |
2026-03-05 00:47:16.150002 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:47:16.150009 | orchestrator | Thursday 05 March 2026 00:47:12 +0000 (0:00:00.186) 0:00:57.265 ********
2026-03-05 00:47:16.150062 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:16.150069 | orchestrator |
2026-03-05 00:47:16.150075 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:47:16.150080 | orchestrator | Thursday 05 March 2026 00:47:13 +0000 (0:00:00.239) 0:00:57.504 ********
2026-03-05 00:47:16.150086 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:16.150092 | orchestrator |
2026-03-05 00:47:16.150097 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:47:16.150103 | orchestrator | Thursday 05 March 2026 00:47:13 +0000 (0:00:00.218) 0:00:57.723 ********
2026-03-05 00:47:16.150109 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90)
2026-03-05 00:47:16.150116 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90)
2026-03-05 00:47:16.150122 | orchestrator |
2026-03-05 00:47:16.150128 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:47:16.150134 | orchestrator | Thursday 05 March 2026 00:47:13 +0000 (0:00:00.440) 0:00:58.163 ********
2026-03-05 00:47:16.150141 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_177e9830-d762-48d2-8720-88dd872b3a27)
2026-03-05 00:47:16.150146 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_177e9830-d762-48d2-8720-88dd872b3a27)
2026-03-05 00:47:16.150152 | orchestrator |
2026-03-05 00:47:16.150159 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:47:16.150177 | orchestrator | Thursday 05 March 2026 00:47:14 +0000 (0:00:00.443) 0:00:58.606 ********
2026-03-05 00:47:16.150185 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_80e7620b-1c7d-40ff-852b-40246feca9c5)
2026-03-05 00:47:16.150193 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_80e7620b-1c7d-40ff-852b-40246feca9c5)
2026-03-05 00:47:16.150200 | orchestrator |
2026-03-05 00:47:16.150207 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:47:16.150215 | orchestrator | Thursday 05 March 2026 00:47:14 +0000 (0:00:00.448) 0:00:59.055 ********
2026-03-05 00:47:16.150223 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_886d7f4d-c342-4547-93ea-f5198c18b4a1)
2026-03-05 00:47:16.150230 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_886d7f4d-c342-4547-93ea-f5198c18b4a1)
2026-03-05 00:47:16.150238 | orchestrator |
2026-03-05 00:47:16.150246 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-05 00:47:16.150254 | orchestrator | Thursday 05 March 2026 00:47:15 +0000 (0:00:00.553) 0:00:59.608 ********
2026-03-05 00:47:16.150261 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-05 00:47:16.150269 | orchestrator |
2026-03-05 00:47:16.150277 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:16.150285 | orchestrator | Thursday 05 March 2026 00:47:15 +0000 (0:00:00.448) 0:01:00.057 ********
2026-03-05 00:47:16.150293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-05 00:47:16.150302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-05 00:47:16.150310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-05 00:47:16.150317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-05 00:47:16.150325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-05 00:47:16.150334 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-05 00:47:16.150342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-05 00:47:16.150351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-05 00:47:16.150358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-05 00:47:16.150365 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-05 00:47:16.150371 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-05 00:47:16.150388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-05 00:47:25.745583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-05 00:47:25.745662 | orchestrator |
2026-03-05 00:47:25.745670 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:25.745678 | orchestrator | Thursday 05 March 2026 00:47:16 +0000 (0:00:00.415) 0:01:00.472 ********
2026-03-05 00:47:25.745684 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.745691 | orchestrator |
2026-03-05 00:47:25.745698 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:25.745704 | orchestrator | Thursday 05 March 2026 00:47:16 +0000 (0:00:00.222) 0:01:00.695 ********
2026-03-05 00:47:25.745711 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.745717 | orchestrator |
2026-03-05 00:47:25.745724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:25.745729 | orchestrator | Thursday 05 March 2026 00:47:17 +0000 (0:00:00.792) 0:01:01.487 ********
2026-03-05 00:47:25.745734 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.745762 | orchestrator |
2026-03-05 00:47:25.745766 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:25.745771 | orchestrator | Thursday 05 March 2026 00:47:17 +0000 (0:00:00.277) 0:01:01.765 ********
2026-03-05 00:47:25.745774 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.745778 | orchestrator |
2026-03-05 00:47:25.745782 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:25.745786 | orchestrator | Thursday 05 March 2026 00:47:17 +0000 (0:00:00.208) 0:01:01.973 ********
2026-03-05 00:47:25.745790 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.745794 | orchestrator |
2026-03-05 00:47:25.745798 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:25.745802 | orchestrator | Thursday 05 March 2026 00:47:17 +0000 (0:00:00.203) 0:01:02.176 ********
2026-03-05 00:47:25.745806 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.745810 | orchestrator |
2026-03-05 00:47:25.745814 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:25.745817 | orchestrator | Thursday 05 March 2026 00:47:18 +0000 (0:00:00.237) 0:01:02.414 ********
2026-03-05 00:47:25.745821 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.745825 | orchestrator |
2026-03-05 00:47:25.745829 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:25.745833 | orchestrator | Thursday 05 March 2026 00:47:18 +0000 (0:00:00.208) 0:01:02.623 ********
2026-03-05 00:47:25.745837 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.745840 | orchestrator |
2026-03-05 00:47:25.745844 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:25.745848 | orchestrator | Thursday 05 March 2026 00:47:18 +0000 (0:00:00.245) 0:01:02.868 ********
2026-03-05 00:47:25.745853 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-05 00:47:25.745914 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-05 00:47:25.745923 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-05 00:47:25.745929 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-05 00:47:25.745937 | orchestrator |
2026-03-05 00:47:25.745941 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:25.745945 | orchestrator | Thursday 05 March 2026 00:47:19 +0000 (0:00:00.673) 0:01:03.541 ********
2026-03-05 00:47:25.745948 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.745952 | orchestrator |
2026-03-05 00:47:25.745956 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:25.745960 | orchestrator | Thursday 05 March 2026 00:47:19 +0000 (0:00:00.215) 0:01:03.757 ********
2026-03-05 00:47:25.745965 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.745970 | orchestrator |
2026-03-05 00:47:25.745976 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:25.745981 | orchestrator | Thursday 05 March 2026 00:47:19 +0000 (0:00:00.197) 0:01:03.954 ********
2026-03-05 00:47:25.745991 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.745998 | orchestrator |
2026-03-05 00:47:25.746004 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-05 00:47:25.746010 | orchestrator | Thursday 05 March 2026 00:47:19 +0000 (0:00:00.185) 0:01:04.139 ********
2026-03-05 00:47:25.746045 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.746054 | orchestrator |
2026-03-05 00:47:25.746058 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-05 00:47:25.746062 | orchestrator | Thursday 05 March 2026 00:47:20 +0000 (0:00:00.230) 0:01:04.370 ********
2026-03-05 00:47:25.746066 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.746070 | orchestrator |
2026-03-05 00:47:25.746074 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-05 00:47:25.746078 | orchestrator | Thursday 05 March 2026 00:47:20 +0000 (0:00:00.290) 0:01:04.660 ********
2026-03-05 00:47:25.746082 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bb27c3c1-5e00-588a-af48-66c3e9a20c72'}})
2026-03-05 00:47:25.746091 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52eeae7c-0ac3-5716-aafe-40e466221a22'}})
2026-03-05 00:47:25.746095 | orchestrator |
2026-03-05 00:47:25.746099 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-05 00:47:25.746103 | orchestrator | Thursday 05 March 2026 00:47:20 +0000 (0:00:00.191) 0:01:04.852 ********
2026-03-05 00:47:25.746109 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:25.746114 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:25.746118 | orchestrator |
2026-03-05 00:47:25.746122 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-05 00:47:25.746137 | orchestrator | Thursday 05 March 2026 00:47:22 +0000 (0:00:01.983) 0:01:06.836 ********
2026-03-05 00:47:25.746141 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:25.746146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:25.746150 | orchestrator | skipping:
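The "Create dict of block VGs -> PVs from ceph_osd_devices" and "Create block VGs" tasks above show a naming scheme: each device's `osd_lvm_uuid` becomes a VG `ceph-<uuid>` holding an LV `osd-block-<uuid>`. The following is a minimal sketch of that mapping, not the playbook's actual code; the input dict shape and UUIDs are copied from the log, the derivation logic is an assumption.

```python
# Hedged sketch: reproduce the VG/LV names that the "Create block VGs" /
# "Create block LVs" tasks print, from a ceph_osd_devices-style dict.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "bb27c3c1-5e00-588a-af48-66c3e9a20c72"},
    "sdc": {"osd_lvm_uuid": "52eeae7c-0ac3-5716-aafe-40e466221a22"},
}

# One entry per OSD: LV name 'osd-block-<uuid>' inside VG 'ceph-<uuid>'.
lvm_volumes = [
    {
        "data": f"osd-block-{spec['osd_lvm_uuid']}",
        "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
    }
    for spec in ceph_osd_devices.values()
]

for vol in lvm_volumes:
    # The play would then run the equivalent of (illustrative, not verbatim):
    #   vgcreate <data_vg> /dev/<device>
    #   lvcreate -l 100%FREE -n <data> <data_vg>
    print(vol["data_vg"], vol["data"])
```

The printed pairs match the `(item={'data': ..., 'data_vg': ...})` loop items visible in the log.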
[testbed-node-5]
2026-03-05 00:47:25.746154 | orchestrator |
2026-03-05 00:47:25.746158 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-05 00:47:25.746161 | orchestrator | Thursday 05 March 2026 00:47:22 +0000 (0:00:00.179) 0:01:07.016 ********
2026-03-05 00:47:25.746166 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:25.746170 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:25.746173 | orchestrator |
2026-03-05 00:47:25.746177 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-05 00:47:25.746181 | orchestrator | Thursday 05 March 2026 00:47:24 +0000 (0:00:01.441) 0:01:08.457 ********
2026-03-05 00:47:25.746185 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:25.746189 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:25.746192 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.746196 | orchestrator |
2026-03-05 00:47:25.746200 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-05 00:47:25.746204 | orchestrator | Thursday 05 March 2026 00:47:24 +0000 (0:00:00.139) 0:01:08.629 ********
2026-03-05 00:47:25.746208 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.746211 | orchestrator |
2026-03-05 00:47:25.746215 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-05 00:47:25.746219 | orchestrator | Thursday 05 March 2026 00:47:24 +0000 (0:00:00.139) 0:01:08.768 ********
2026-03-05 00:47:25.746223 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:25.746230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:25.746234 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.746238 | orchestrator |
2026-03-05 00:47:25.746242 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-05 00:47:25.746246 | orchestrator | Thursday 05 March 2026 00:47:24 +0000 (0:00:00.168) 0:01:08.937 ********
2026-03-05 00:47:25.746254 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.746258 | orchestrator |
2026-03-05 00:47:25.746262 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-05 00:47:25.746265 | orchestrator | Thursday 05 March 2026 00:47:24 +0000 (0:00:00.145) 0:01:09.083 ********
2026-03-05 00:47:25.746269 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:25.746273 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:25.746277 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.746281 | orchestrator |
2026-03-05 00:47:25.746284 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-05 00:47:25.746288 | orchestrator | Thursday 05 March 2026 00:47:24 +0000 (0:00:00.161) 0:01:09.244 ********
2026-03-05 00:47:25.746292 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.746296 | orchestrator |
2026-03-05 00:47:25.746300 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-05 00:47:25.746303 | orchestrator | Thursday 05 March 2026 00:47:25 +0000 (0:00:00.153) 0:01:09.398 ********
2026-03-05 00:47:25.746307 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:25.746311 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:25.746315 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:25.746319 | orchestrator |
2026-03-05 00:47:25.746322 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-05 00:47:25.746326 | orchestrator | Thursday 05 March 2026 00:47:25 +0000 (0:00:00.357) 0:01:09.559 ********
2026-03-05 00:47:25.746330 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:47:25.746334 | orchestrator |
2026-03-05 00:47:25.746338 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-05 00:47:25.746342 | orchestrator | Thursday 05 March 2026 00:47:25 +0000 (0:00:00.162) 0:01:09.916 ********
2026-03-05 00:47:25.746349 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:32.139706 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:32.139763 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.139773 | orchestrator |
2026-03-05 00:47:32.139780 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-05 00:47:32.139788 | orchestrator | Thursday 05 March 2026 00:47:25 +0000 (0:00:00.162) 0:01:10.079 ********
2026-03-05 00:47:32.139795 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:32.139802 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:32.139809 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.139816 | orchestrator |
2026-03-05 00:47:32.139822 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-05 00:47:32.139829 | orchestrator | Thursday 05 March 2026 00:47:25 +0000 (0:00:00.158) 0:01:10.237 ********
2026-03-05 00:47:32.139835 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:32.139841 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:32.139893 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.139900 | orchestrator |
2026-03-05 00:47:32.139906 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-05 00:47:32.139913 | orchestrator | Thursday 05 March 2026 00:47:26 +0000 (0:00:00.162) 0:01:10.400 ********
2026-03-05 00:47:32.139919 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.139925 | orchestrator |
2026-03-05 00:47:32.139932 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-05 00:47:32.139938 | orchestrator | Thursday 05 March 2026 00:47:26 +0000 (0:00:00.143) 0:01:10.543 ********
2026-03-05 00:47:32.139945 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.139952 | orchestrator |
2026-03-05 00:47:32.139958 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-05 00:47:32.139965 | orchestrator | Thursday 05 March 2026 00:47:26 +0000 (0:00:00.139) 0:01:10.683 ********
2026-03-05 00:47:32.139972 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.139978 | orchestrator |
2026-03-05 00:47:32.139985 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-05 00:47:32.139991 | orchestrator | Thursday 05 March 2026 00:47:26 +0000 (0:00:00.140) 0:01:10.823 ********
2026-03-05 00:47:32.139998 | orchestrator | ok: [testbed-node-5] => {
2026-03-05 00:47:32.140006 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-05 00:47:32.140013 | orchestrator | }
2026-03-05 00:47:32.140019 | orchestrator |
2026-03-05 00:47:32.140026 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-05 00:47:32.140030 | orchestrator | Thursday 05 March 2026 00:47:26 +0000 (0:00:00.144) 0:01:10.968 ********
2026-03-05 00:47:32.140034 | orchestrator | ok: [testbed-node-5] => {
2026-03-05 00:47:32.140038 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-05 00:47:32.140042 | orchestrator | }
2026-03-05 00:47:32.140046 | orchestrator |
2026-03-05 00:47:32.140050 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-05 00:47:32.140054 | orchestrator | Thursday 05 March 2026 00:47:26 +0000 (0:00:00.145) 0:01:11.114 ********
2026-03-05 00:47:32.140057 | orchestrator | ok: [testbed-node-5] => {
2026-03-05 00:47:32.140061 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-05 00:47:32.140065 | orchestrator | }
2026-03-05 00:47:32.140069 | orchestrator |
2026-03-05 00:47:32.140073 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-05 00:47:32.140076 | orchestrator | Thursday 05 March 2026 00:47:26 +0000 (0:00:00.151) 0:01:11.265 ********
2026-03-05 00:47:32.140080 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:47:32.140084 | orchestrator |
2026-03-05 00:47:32.140088 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-05 00:47:32.140092 | orchestrator | Thursday 05 March 2026 00:47:27 +0000 (0:00:00.555) 0:01:11.821 ********
2026-03-05 00:47:32.140095 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:47:32.140099 | orchestrator |
2026-03-05 00:47:32.140103 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-05 00:47:32.140106 | orchestrator | Thursday 05 March 2026 00:47:28 +0000 (0:00:00.541) 0:01:12.362 ********
2026-03-05 00:47:32.140110 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:47:32.140114 | orchestrator |
2026-03-05 00:47:32.140118 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-05 00:47:32.140121 | orchestrator | Thursday 05 March 2026 00:47:28 +0000 (0:00:00.807) 0:01:13.170 ********
2026-03-05 00:47:32.140125 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:47:32.140129 | orchestrator |
2026-03-05 00:47:32.140132 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-05 00:47:32.140136 | orchestrator | Thursday 05 March 2026 00:47:28 +0000 (0:00:00.165) 0:01:13.336 ********
2026-03-05 00:47:32.140140 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140144 | orchestrator |
2026-03-05 00:47:32.140147 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-05 00:47:32.140156 | orchestrator | Thursday 05 March 2026 00:47:29 +0000 (0:00:00.130) 0:01:13.466 ********
2026-03-05 00:47:32.140159 |
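The "Gather DB/WAL/DB+WAL VGs with total and available size in bytes" tasks followed by "Combine JSON from _db/wal/db_wal_vgs_cmd_output" suggest three `vgs --reportformat json`-style command outputs being parsed and merged into one `vgs_report` (which ends up as `"vg": []` on this node, since no dedicated DB/WAL devices are configured). A minimal sketch of that combine step, under the assumption that each command output follows the lvm2 JSON report layout; the sample payloads are illustrative, not taken from the log.

```python
import json

# Hedged sketch: merge the "vg" lists from three lvm2 JSON reports, the way
# the "Combine JSON from _db/wal/db_wal_vgs_cmd_output" task appears to.
# On testbed-node-5 all three were empty, yielding vgs_report == {"vg": []}.
_db_vgs_cmd_output = json.dumps({"report": [{"vg": []}]})
_wal_vgs_cmd_output = json.dumps({"report": [{"vg": []}]})
_db_wal_vgs_cmd_output = json.dumps({"report": [{"vg": []}]})

vgs_report = {"vg": []}
for output in (_db_vgs_cmd_output, _wal_vgs_cmd_output, _db_wal_vgs_cmd_output):
    # lvm2 JSON reports nest the VG list under report[0]["vg"];
    # each entry would carry fields such as vg_name, vg_size, vg_free.
    vgs_report["vg"].extend(json.loads(output)["report"][0]["vg"])

print(vgs_report)
```

With all three inputs empty this prints `{'vg': []}`, matching the "Print LVM VGs report data" output below.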
orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140163 | orchestrator |
2026-03-05 00:47:32.140167 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-05 00:47:32.140179 | orchestrator | Thursday 05 March 2026 00:47:29 +0000 (0:00:00.099) 0:01:13.566 ********
2026-03-05 00:47:32.140183 | orchestrator | ok: [testbed-node-5] => {
2026-03-05 00:47:32.140187 | orchestrator |     "vgs_report": {
2026-03-05 00:47:32.140191 | orchestrator |         "vg": []
2026-03-05 00:47:32.140204 | orchestrator |     }
2026-03-05 00:47:32.140208 | orchestrator | }
2026-03-05 00:47:32.140212 | orchestrator |
2026-03-05 00:47:32.140216 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-05 00:47:32.140220 | orchestrator | Thursday 05 March 2026 00:47:29 +0000 (0:00:00.144) 0:01:13.710 ********
2026-03-05 00:47:32.140224 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140227 | orchestrator |
2026-03-05 00:47:32.140231 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-05 00:47:32.140235 | orchestrator | Thursday 05 March 2026 00:47:29 +0000 (0:00:00.130) 0:01:13.841 ********
2026-03-05 00:47:32.140239 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140242 | orchestrator |
2026-03-05 00:47:32.140246 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-05 00:47:32.140250 | orchestrator | Thursday 05 March 2026 00:47:29 +0000 (0:00:00.146) 0:01:13.988 ********
2026-03-05 00:47:32.140254 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140257 | orchestrator |
2026-03-05 00:47:32.140261 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-05 00:47:32.140265 | orchestrator | Thursday 05 March 2026 00:47:29 +0000 (0:00:00.139) 0:01:14.127 ********
2026-03-05 00:47:32.140271 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140277 | orchestrator |
2026-03-05 00:47:32.140283 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-05 00:47:32.140289 | orchestrator | Thursday 05 March 2026 00:47:29 +0000 (0:00:00.141) 0:01:14.268 ********
2026-03-05 00:47:32.140293 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140297 | orchestrator |
2026-03-05 00:47:32.140301 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-05 00:47:32.140305 | orchestrator | Thursday 05 March 2026 00:47:30 +0000 (0:00:00.148) 0:01:14.417 ********
2026-03-05 00:47:32.140308 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140312 | orchestrator |
2026-03-05 00:47:32.140316 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-05 00:47:32.140320 | orchestrator | Thursday 05 March 2026 00:47:30 +0000 (0:00:00.135) 0:01:14.552 ********
2026-03-05 00:47:32.140323 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140327 | orchestrator |
2026-03-05 00:47:32.140331 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-05 00:47:32.140335 | orchestrator | Thursday 05 March 2026 00:47:30 +0000 (0:00:00.168) 0:01:14.721 ********
2026-03-05 00:47:32.140344 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140348 | orchestrator |
2026-03-05 00:47:32.140352 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-05 00:47:32.140356 | orchestrator | Thursday 05 March 2026 00:47:30 +0000 (0:00:00.363) 0:01:15.085 ********
2026-03-05 00:47:32.140360 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140363 | orchestrator |
2026-03-05 00:47:32.140370 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-05 00:47:32.140374 | orchestrator | Thursday 05 March 2026 00:47:30 +0000 (0:00:00.149) 0:01:15.245 ********
2026-03-05 00:47:32.140377 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140381 | orchestrator |
2026-03-05 00:47:32.140385 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-05 00:47:32.140389 | orchestrator | Thursday 05 March 2026 00:47:31 +0000 (0:00:00.149) 0:01:15.395 ********
2026-03-05 00:47:32.140397 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140401 | orchestrator |
2026-03-05 00:47:32.140404 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-05 00:47:32.140408 | orchestrator | Thursday 05 March 2026 00:47:31 +0000 (0:00:00.140) 0:01:15.535 ********
2026-03-05 00:47:32.140412 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140416 | orchestrator |
2026-03-05 00:47:32.140421 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-05 00:47:32.140428 | orchestrator | Thursday 05 March 2026 00:47:31 +0000 (0:00:00.138) 0:01:15.674 ********
2026-03-05 00:47:32.140434 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140440 | orchestrator |
2026-03-05 00:47:32.140447 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-05 00:47:32.140453 | orchestrator | Thursday 05 March 2026 00:47:31 +0000 (0:00:00.154) 0:01:15.828 ********
2026-03-05 00:47:32.140460 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140466 | orchestrator |
2026-03-05 00:47:32.140472 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-05 00:47:32.140478 | orchestrator | Thursday 05 March 2026 00:47:31 +0000 (0:00:00.149) 0:01:15.978 ********
2026-03-05 00:47:32.140482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:32.140486 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:32.140489 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140493 | orchestrator |
2026-03-05 00:47:32.140497 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-05 00:47:32.140500 | orchestrator | Thursday 05 March 2026 00:47:31 +0000 (0:00:00.172) 0:01:16.150 ********
2026-03-05 00:47:32.140504 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:32.140508 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:32.140512 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:32.140515 | orchestrator |
2026-03-05 00:47:32.140519 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-05 00:47:32.140523 | orchestrator | Thursday 05 March 2026 00:47:31 +0000 (0:00:00.162) 0:01:16.313 ********
2026-03-05 00:47:32.140530 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:35.174236 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:35.174293 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:35.174317 | orchestrator |
2026-03-05 00:47:35.174324 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-05 00:47:35.174332 | orchestrator | Thursday 05 March 2026 00:47:32 +0000 (0:00:00.160) 0:01:16.473 ********
2026-03-05 00:47:35.174338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:35.174345 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:35.174351 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:35.174357 | orchestrator |
2026-03-05 00:47:35.174364 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-05 00:47:35.174370 | orchestrator | Thursday 05 March 2026 00:47:32 +0000 (0:00:00.179) 0:01:16.653 ********
2026-03-05 00:47:35.174393 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:35.174401 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:35.174407 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:35.174414 | orchestrator |
2026-03-05 00:47:35.174421 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-05 00:47:35.174427 | orchestrator | Thursday 05 March 2026 00:47:32 +0000 (0:00:00.155) 0:01:16.808 ********
2026-03-05 00:47:35.174433 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:35.174439 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:35.174455 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:35.174462 | orchestrator |
2026-03-05 00:47:35.174469 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-05 00:47:35.174476 | orchestrator | Thursday 05 March 2026 00:47:32 +0000 (0:00:00.415) 0:01:17.224 ********
2026-03-05 00:47:35.174482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:35.174489 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:35.174495 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:35.174501 | orchestrator |
2026-03-05 00:47:35.174515 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-05 00:47:35.174519 | orchestrator | Thursday 05 March 2026 00:47:33 +0000 (0:00:00.167) 0:01:17.391 ********
2026-03-05 00:47:35.174525 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:35.174532 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:35.174538 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:35.174544 | orchestrator |
2026-03-05 00:47:35.174551 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-05 00:47:35.174557 | orchestrator | Thursday 05 March 2026 00:47:33 +0000 (0:00:00.166) 0:01:17.558 ********
2026-03-05 00:47:35.174564 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:47:35.174571 | orchestrator |
2026-03-05 00:47:35.174578 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-05 00:47:35.174584 | orchestrator | Thursday 05 March 2026 00:47:33 +0000 (0:00:00.476) 0:01:18.035 ********
2026-03-05 00:47:35.174592 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:47:35.174596 | orchestrator |
2026-03-05 00:47:35.174600 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-05 00:47:35.174603 | orchestrator | Thursday 05 March 2026 00:47:34 +0000 (0:00:00.458) 0:01:18.493 ********
2026-03-05 00:47:35.174607 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:47:35.174611 | orchestrator |
2026-03-05 00:47:35.174615 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-05 00:47:35.174618 | orchestrator | Thursday 05 March 2026 00:47:34 +0000 (0:00:00.168) 0:01:18.661 ********
2026-03-05 00:47:35.174622 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'vg_name': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:35.174627 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'vg_name': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:35.174636 | orchestrator |
2026-03-05 00:47:35.174639 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-05 00:47:35.174643 | orchestrator | Thursday 05 March 2026 00:47:34 +0000 (0:00:00.170) 0:01:18.832 ********
2026-03-05 00:47:35.174659 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:35.174664 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:35.174667 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:35.174671 | orchestrator |
2026-03-05 00:47:35.174675 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-05 00:47:35.174691 | orchestrator | Thursday 05 March 2026 00:47:34 +0000 (0:00:00.157) 0:01:18.989 ********
2026-03-05 00:47:35.174697 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:35.174704 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:35.174711 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:35.174717 | orchestrator |
2026-03-05 00:47:35.174724 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-05 00:47:35.174731 | orchestrator | Thursday 05 March 2026 00:47:34 +0000 (0:00:00.160) 0:01:19.149 ********
2026-03-05 00:47:35.174737 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 00:47:35.174744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 00:47:35.174750 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:47:35.174756 | orchestrator |
2026-03-05 00:47:35.174762 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-05 00:47:35.174768 | orchestrator | Thursday 05 March 2026 00:47:34 +0000 (0:00:00.169) 0:01:19.319 ********
2026-03-05 00:47:35.174788 |
orchestrator | ok: [testbed-node-5] => { 2026-03-05 00:47:35.174795 | orchestrator |  "lvm_report": { 2026-03-05 00:47:35.174802 | orchestrator |  "lv": [ 2026-03-05 00:47:35.174808 | orchestrator |  { 2026-03-05 00:47:35.174815 | orchestrator |  "lv_name": "osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22", 2026-03-05 00:47:35.174826 | orchestrator |  "vg_name": "ceph-52eeae7c-0ac3-5716-aafe-40e466221a22" 2026-03-05 00:47:35.174833 | orchestrator |  }, 2026-03-05 00:47:35.174839 | orchestrator |  { 2026-03-05 00:47:35.174845 | orchestrator |  "lv_name": "osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72", 2026-03-05 00:47:35.174851 | orchestrator |  "vg_name": "ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72" 2026-03-05 00:47:35.174946 | orchestrator |  } 2026-03-05 00:47:35.174964 | orchestrator |  ], 2026-03-05 00:47:35.174970 | orchestrator |  "pv": [ 2026-03-05 00:47:35.174976 | orchestrator |  { 2026-03-05 00:47:35.174982 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-05 00:47:35.174989 | orchestrator |  "vg_name": "ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72" 2026-03-05 00:47:35.174995 | orchestrator |  }, 2026-03-05 00:47:35.175001 | orchestrator |  { 2026-03-05 00:47:35.175008 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-05 00:47:35.175014 | orchestrator |  "vg_name": "ceph-52eeae7c-0ac3-5716-aafe-40e466221a22" 2026-03-05 00:47:35.175021 | orchestrator |  } 2026-03-05 00:47:35.175028 | orchestrator |  ] 2026-03-05 00:47:35.175034 | orchestrator |  } 2026-03-05 00:47:35.175041 | orchestrator | } 2026-03-05 00:47:35.175056 | orchestrator | 2026-03-05 00:47:35.175063 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:47:35.175069 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-05 00:47:35.175077 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-05 00:47:35.175084 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-05 00:47:35.175091 | orchestrator | 2026-03-05 00:47:35.175098 | orchestrator | 2026-03-05 00:47:35.175104 | orchestrator | 2026-03-05 00:47:35.175111 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:47:35.175118 | orchestrator | Thursday 05 March 2026 00:47:35 +0000 (0:00:00.163) 0:01:19.483 ******** 2026-03-05 00:47:35.175125 | orchestrator | =============================================================================== 2026-03-05 00:47:35.175131 | orchestrator | Create block VGs -------------------------------------------------------- 5.90s 2026-03-05 00:47:35.175138 | orchestrator | Create block LVs -------------------------------------------------------- 4.43s 2026-03-05 00:47:35.175146 | orchestrator | Add known partitions to the list of available block devices ------------- 1.97s 2026-03-05 00:47:35.175152 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.97s 2026-03-05 00:47:35.175159 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.89s 2026-03-05 00:47:35.175166 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.67s 2026-03-05 00:47:35.175173 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.65s 2026-03-05 00:47:35.175180 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.62s 2026-03-05 00:47:35.175196 | orchestrator | Add known links to the list of available block devices ------------------ 1.54s 2026-03-05 00:47:35.633585 | orchestrator | Add known partitions to the list of available block devices ------------- 1.15s 2026-03-05 00:47:35.633644 | orchestrator | Print LVM report data --------------------------------------------------- 1.00s 2026-03-05 00:47:35.633651 | 
orchestrator | Add known links to the list of available block devices ------------------ 1.00s 2026-03-05 00:47:35.633656 | orchestrator | Add known links to the list of available block devices ------------------ 0.97s 2026-03-05 00:47:35.633661 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-03-05 00:47:35.633665 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.87s 2026-03-05 00:47:35.633670 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.86s 2026-03-05 00:47:35.633675 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2026-03-05 00:47:35.633680 | orchestrator | Get initial list of available block devices ----------------------------- 0.79s 2026-03-05 00:47:35.633684 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.75s 2026-03-05 00:47:35.633689 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.74s 2026-03-05 00:47:48.021805 | orchestrator | 2026-03-05 00:47:48 | INFO  | Task 16379887-0175-44f6-8dd1-23f2f1254d86 (facts) was prepared for execution. 2026-03-05 00:47:48.021988 | orchestrator | 2026-03-05 00:47:48 | INFO  | It takes a moment until task 16379887-0175-44f6-8dd1-23f2f1254d86 (facts) has been started and output is visible here. 
2026-03-05 00:48:02.068785 | orchestrator | 2026-03-05 00:48:02.068988 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-05 00:48:02.069017 | orchestrator | 2026-03-05 00:48:02.069049 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-05 00:48:02.069069 | orchestrator | Thursday 05 March 2026 00:47:52 +0000 (0:00:00.266) 0:00:00.266 ******** 2026-03-05 00:48:02.069122 | orchestrator | ok: [testbed-manager] 2026-03-05 00:48:02.069144 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:48:02.069163 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:48:02.069181 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:48:02.069200 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:48:02.069217 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:48:02.069235 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:48:02.069254 | orchestrator | 2026-03-05 00:48:02.069273 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-05 00:48:02.069291 | orchestrator | Thursday 05 March 2026 00:47:53 +0000 (0:00:01.200) 0:00:01.467 ******** 2026-03-05 00:48:02.069312 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:48:02.069333 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:48:02.069352 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:48:02.069371 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:48:02.069390 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:48:02.069410 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:48:02.069429 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:48:02.069447 | orchestrator | 2026-03-05 00:48:02.069467 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-05 00:48:02.069485 | orchestrator | 2026-03-05 00:48:02.069503 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-05 00:48:02.069522 | orchestrator | Thursday 05 March 2026 00:47:55 +0000 (0:00:01.333) 0:00:02.800 ******** 2026-03-05 00:48:02.069540 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:48:02.069560 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:48:02.069578 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:48:02.069597 | orchestrator | ok: [testbed-manager] 2026-03-05 00:48:02.069610 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:48:02.069621 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:48:02.069632 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:48:02.069643 | orchestrator | 2026-03-05 00:48:02.069654 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-05 00:48:02.069665 | orchestrator | 2026-03-05 00:48:02.069676 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-05 00:48:02.069688 | orchestrator | Thursday 05 March 2026 00:48:01 +0000 (0:00:05.857) 0:00:08.658 ******** 2026-03-05 00:48:02.069699 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:48:02.069710 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:48:02.069722 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:48:02.069733 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:48:02.069744 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:48:02.069755 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:48:02.069766 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:48:02.069777 | orchestrator | 2026-03-05 00:48:02.069789 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:48:02.069800 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:48:02.069813 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-05 00:48:02.069824 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:48:02.069901 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:48:02.069916 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:48:02.069927 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:48:02.069938 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:48:02.069965 | orchestrator | 2026-03-05 00:48:02.069976 | orchestrator | 2026-03-05 00:48:02.069987 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:48:02.069998 | orchestrator | Thursday 05 March 2026 00:48:01 +0000 (0:00:00.529) 0:00:09.187 ******** 2026-03-05 00:48:02.070009 | orchestrator | =============================================================================== 2026-03-05 00:48:02.070111 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.86s 2026-03-05 00:48:02.070126 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.33s 2026-03-05 00:48:02.070136 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.20s 2026-03-05 00:48:02.070146 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2026-03-05 00:48:14.609492 | orchestrator | 2026-03-05 00:48:14 | INFO  | Task f39f9d82-b198-4094-96c5-e91c3275d2ac (frr) was prepared for execution. 2026-03-05 00:48:14.609563 | orchestrator | 2026-03-05 00:48:14 | INFO  | It takes a moment until task f39f9d82-b198-4094-96c5-e91c3275d2ac (frr) has been started and output is visible here. 
2026-03-05 00:48:42.368849 | orchestrator | 2026-03-05 00:48:42.368989 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-05 00:48:42.369007 | orchestrator | 2026-03-05 00:48:42.369019 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-05 00:48:42.369053 | orchestrator | Thursday 05 March 2026 00:48:18 +0000 (0:00:00.253) 0:00:00.254 ******** 2026-03-05 00:48:42.369066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-05 00:48:42.369079 | orchestrator | 2026-03-05 00:48:42.369098 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-05 00:48:42.369117 | orchestrator | Thursday 05 March 2026 00:48:19 +0000 (0:00:00.237) 0:00:00.491 ******** 2026-03-05 00:48:42.369136 | orchestrator | changed: [testbed-manager] 2026-03-05 00:48:42.369155 | orchestrator | 2026-03-05 00:48:42.369172 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-05 00:48:42.369191 | orchestrator | Thursday 05 March 2026 00:48:20 +0000 (0:00:01.238) 0:00:01.729 ******** 2026-03-05 00:48:42.369216 | orchestrator | changed: [testbed-manager] 2026-03-05 00:48:42.369235 | orchestrator | 2026-03-05 00:48:42.369252 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-05 00:48:42.369271 | orchestrator | Thursday 05 March 2026 00:48:31 +0000 (0:00:10.663) 0:00:12.392 ******** 2026-03-05 00:48:42.369289 | orchestrator | ok: [testbed-manager] 2026-03-05 00:48:42.369309 | orchestrator | 2026-03-05 00:48:42.369328 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-05 00:48:42.369347 | orchestrator | Thursday 05 March 2026 00:48:32 +0000 (0:00:01.032) 0:00:13.425 ******** 2026-03-05 
00:48:42.369368 | orchestrator | changed: [testbed-manager] 2026-03-05 00:48:42.369388 | orchestrator | 2026-03-05 00:48:42.369408 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-05 00:48:42.369421 | orchestrator | Thursday 05 March 2026 00:48:33 +0000 (0:00:00.961) 0:00:14.386 ******** 2026-03-05 00:48:42.369434 | orchestrator | ok: [testbed-manager] 2026-03-05 00:48:42.369448 | orchestrator | 2026-03-05 00:48:42.369460 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-05 00:48:42.369473 | orchestrator | Thursday 05 March 2026 00:48:34 +0000 (0:00:01.252) 0:00:15.639 ******** 2026-03-05 00:48:42.369484 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:48:42.369495 | orchestrator | 2026-03-05 00:48:42.369506 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-05 00:48:42.369517 | orchestrator | Thursday 05 March 2026 00:48:34 +0000 (0:00:00.189) 0:00:15.828 ******** 2026-03-05 00:48:42.369530 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:48:42.369562 | orchestrator | 2026-03-05 00:48:42.369574 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-05 00:48:42.369585 | orchestrator | Thursday 05 March 2026 00:48:34 +0000 (0:00:00.177) 0:00:16.006 ******** 2026-03-05 00:48:42.369596 | orchestrator | changed: [testbed-manager] 2026-03-05 00:48:42.369607 | orchestrator | 2026-03-05 00:48:42.369618 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-05 00:48:42.369629 | orchestrator | Thursday 05 March 2026 00:48:35 +0000 (0:00:01.025) 0:00:17.031 ******** 2026-03-05 00:48:42.369640 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-05 00:48:42.369651 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-05 00:48:42.369663 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-05 00:48:42.369674 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-05 00:48:42.369686 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-05 00:48:42.369697 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-05 00:48:42.369708 | orchestrator | 2026-03-05 00:48:42.369719 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-05 00:48:42.369730 | orchestrator | Thursday 05 March 2026 00:48:38 +0000 (0:00:03.310) 0:00:20.341 ******** 2026-03-05 00:48:42.369741 | orchestrator | ok: [testbed-manager] 2026-03-05 00:48:42.369752 | orchestrator | 2026-03-05 00:48:42.369764 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-05 00:48:42.369774 | orchestrator | Thursday 05 March 2026 00:48:40 +0000 (0:00:01.709) 0:00:22.051 ******** 2026-03-05 00:48:42.369786 | orchestrator | changed: [testbed-manager] 2026-03-05 00:48:42.369797 | orchestrator | 2026-03-05 00:48:42.369834 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:48:42.369846 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:48:42.369857 | orchestrator | 2026-03-05 00:48:42.369869 | orchestrator | 2026-03-05 00:48:42.369880 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:48:42.369891 | orchestrator | Thursday 05 March 2026 00:48:42 +0000 (0:00:01.409) 0:00:23.461 ******** 2026-03-05 00:48:42.369902 | 
orchestrator | =============================================================================== 2026-03-05 00:48:42.369913 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.66s 2026-03-05 00:48:42.369924 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.31s 2026-03-05 00:48:42.369935 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.71s 2026-03-05 00:48:42.369946 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.41s 2026-03-05 00:48:42.369957 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.25s 2026-03-05 00:48:42.369990 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.24s 2026-03-05 00:48:42.370002 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.03s 2026-03-05 00:48:42.370070 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.03s 2026-03-05 00:48:42.370082 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.96s 2026-03-05 00:48:42.370094 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.24s 2026-03-05 00:48:42.370105 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.19s 2026-03-05 00:48:42.370116 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s 2026-03-05 00:48:42.671091 | orchestrator | 2026-03-05 00:48:42.673617 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Mar 5 00:48:42 UTC 2026 2026-03-05 00:48:42.673682 | orchestrator | 2026-03-05 00:48:44.661338 | orchestrator | 2026-03-05 00:48:44 | INFO  | Collection nutshell is prepared for execution 2026-03-05 00:48:44.661464 | orchestrator | 2026-03-05 00:48:44 | INFO  | A [0] - 
dotfiles 2026-03-05 00:48:54.672447 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [0] - homer 2026-03-05 00:48:54.672580 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [0] - netdata 2026-03-05 00:48:54.672591 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [0] - openstackclient 2026-03-05 00:48:54.672606 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [0] - phpmyadmin 2026-03-05 00:48:54.672612 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [0] - common 2026-03-05 00:48:54.677531 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [1] -- loadbalancer 2026-03-05 00:48:54.677629 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [2] --- opensearch 2026-03-05 00:48:54.677638 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [2] --- mariadb-ng 2026-03-05 00:48:54.677826 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [3] ---- horizon 2026-03-05 00:48:54.677838 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [3] ---- keystone 2026-03-05 00:48:54.678477 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [4] ----- neutron 2026-03-05 00:48:54.678605 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [5] ------ wait-for-nova 2026-03-05 00:48:54.678618 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [6] ------- octavia 2026-03-05 00:48:54.680729 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [4] ----- barbican 2026-03-05 00:48:54.680775 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [4] ----- designate 2026-03-05 00:48:54.680904 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [4] ----- ironic 2026-03-05 00:48:54.681194 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [4] ----- placement 2026-03-05 00:48:54.681215 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [4] ----- magnum 2026-03-05 00:48:54.682431 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [1] -- openvswitch 2026-03-05 00:48:54.682542 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [2] --- ovn 2026-03-05 00:48:54.682942 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [1] -- memcached 2026-03-05 
00:48:54.683098 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [1] -- redis 2026-03-05 00:48:54.683339 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [1] -- rabbitmq-ng 2026-03-05 00:48:54.685270 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [0] - kubernetes 2026-03-05 00:48:54.687162 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [1] -- kubeconfig 2026-03-05 00:48:54.687368 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [1] -- copy-kubeconfig 2026-03-05 00:48:54.687709 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [0] - ceph 2026-03-05 00:48:54.691475 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [1] -- ceph-pools 2026-03-05 00:48:54.691522 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [2] --- copy-ceph-keys 2026-03-05 00:48:54.691928 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [3] ---- cephclient 2026-03-05 00:48:54.691957 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-05 00:48:54.691968 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [4] ----- wait-for-keystone 2026-03-05 00:48:54.691975 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-05 00:48:54.691980 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [5] ------ glance 2026-03-05 00:48:54.692210 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [5] ------ cinder 2026-03-05 00:48:54.692855 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [5] ------ nova 2026-03-05 00:48:54.692916 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [4] ----- prometheus 2026-03-05 00:48:54.693207 | orchestrator | 2026-03-05 00:48:54 | INFO  | A [5] ------ grafana 2026-03-05 00:48:54.922624 | orchestrator | 2026-03-05 00:48:54 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-05 00:48:54.924879 | orchestrator | 2026-03-05 00:48:54 | INFO  | Tasks are running in the background 2026-03-05 00:48:58.181672 | orchestrator | 2026-03-05 00:48:58 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-03-05 00:49:00.325835 | orchestrator | 2026-03-05 00:49:00 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED 2026-03-05 00:49:00.325932 | orchestrator | 2026-03-05 00:49:00 | INFO  | Task 81ce8f6c-4cb0-4e45-80f5-58c22416f57c is in state STARTED 2026-03-05 00:49:00.326302 | orchestrator | 2026-03-05 00:49:00 | INFO  | Task 7af6cb7e-a87c-4029-8cff-629e0fa56bfc is in state STARTED 2026-03-05 00:49:00.326957 | orchestrator | 2026-03-05 00:49:00 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:49:00.328087 | orchestrator | 2026-03-05 00:49:00 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:49:00.330423 | orchestrator | 2026-03-05 00:49:00 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED 2026-03-05 00:49:00.331230 | orchestrator | 2026-03-05 00:49:00 | INFO  | Task 14be7985-99e0-4374-9c6f-ebe74e622898 is in state STARTED 2026-03-05 00:49:00.331292 | orchestrator | 2026-03-05 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:49:03.441355 | orchestrator | 2026-03-05 00:49:03 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED 2026-03-05 00:49:03.441454 | orchestrator | 2026-03-05 00:49:03 | INFO  | Task 81ce8f6c-4cb0-4e45-80f5-58c22416f57c is in state STARTED 2026-03-05 00:49:03.441464 | orchestrator | 2026-03-05 00:49:03 | INFO  | Task 7af6cb7e-a87c-4029-8cff-629e0fa56bfc is in state STARTED 2026-03-05 00:49:03.441471 | orchestrator | 2026-03-05 00:49:03 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:49:03.441478 | orchestrator | 2026-03-05 00:49:03 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:49:03.441485 | orchestrator | 2026-03-05 00:49:03 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED 2026-03-05 00:49:03.441491 | orchestrator | 2026-03-05 00:49:03 | INFO  | Task 
14be7985-99e0-4374-9c6f-ebe74e622898 is in state STARTED 2026-03-05 00:49:03.441499 | orchestrator | 2026-03-05 00:49:03 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:49:22.377909 | orchestrator | 2026-03-05 00:49:22 | INFO  | Task 
5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:49:22.382975 | orchestrator | 2026-03-05 00:49:22 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:49:22.383672 | orchestrator | 2026-03-05 00:49:22 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED 2026-03-05 00:49:22.387807 | orchestrator | 2026-03-05 00:49:22 | INFO  | Task 14be7985-99e0-4374-9c6f-ebe74e622898 is in state STARTED 2026-03-05 00:49:22.387868 | orchestrator | 2026-03-05 00:49:22 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:49:25.866520 | orchestrator | 2026-03-05 00:49:25 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED 2026-03-05 00:49:25.866627 | orchestrator | 2026-03-05 00:49:25 | INFO  | Task 81ce8f6c-4cb0-4e45-80f5-58c22416f57c is in state STARTED 2026-03-05 00:49:25.866641 | orchestrator | 2026-03-05 00:49:25 | INFO  | Task 7af6cb7e-a87c-4029-8cff-629e0fa56bfc is in state STARTED 2026-03-05 00:49:25.866651 | orchestrator | 2026-03-05 00:49:25 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:49:25.866661 | orchestrator | 2026-03-05 00:49:25 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:49:25.866671 | orchestrator | 2026-03-05 00:49:25 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED 2026-03-05 00:49:25.866681 | orchestrator | 2026-03-05 00:49:25 | INFO  | Task 14be7985-99e0-4374-9c6f-ebe74e622898 is in state STARTED 2026-03-05 00:49:25.866691 | orchestrator | 2026-03-05 00:49:25 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:49:29.001663 | orchestrator | 2026-03-05 00:49:29.001846 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-05 00:49:29.001863 | orchestrator | 2026-03-05 00:49:29.001872 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-03-05 00:49:29.001879 | orchestrator | Thursday 05 March 2026 00:49:10 +0000 (0:00:01.041) 0:00:01.041 ********
2026-03-05 00:49:29.001886 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:49:29.001894 | orchestrator | changed: [testbed-manager]
2026-03-05 00:49:29.001901 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:49:29.001908 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:49:29.001914 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:49:29.001921 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:49:29.001927 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:49:29.001933 | orchestrator |
2026-03-05 00:49:29.001940 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-05 00:49:29.001967 | orchestrator | Thursday 05 March 2026 00:49:15 +0000 (0:00:04.417) 0:00:05.459 ********
2026-03-05 00:49:29.001973 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-05 00:49:29.001978 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-05 00:49:29.001982 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-05 00:49:29.001986 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-05 00:49:29.001990 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-05 00:49:29.001994 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-05 00:49:29.001998 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-05 00:49:29.002002 | orchestrator |
2026-03-05 00:49:29.002006 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2026-03-05 00:49:29.002047 | orchestrator | Thursday 05 March 2026 00:49:17 +0000 (0:00:02.697) 0:00:08.156 ********
2026-03-05 00:49:29.002056 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-05 00:49:16.632807', 'end': '2026-03-05 00:49:16.642040', 'delta': '0:00:00.009233', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
[... analogous ok results (rc=2, "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory") for testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-5, testbed-manager and testbed-node-4 ...]
2026-03-05 00:49:29.002317 | orchestrator |
2026-03-05 00:49:29.002322 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-05 00:49:29.002327 | orchestrator | Thursday 05 March 2026 00:49:20 +0000 (0:00:02.556) 0:00:10.713 ********
2026-03-05 00:49:29.002331 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-05 00:49:29.002336 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-05 00:49:29.002341 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-05 00:49:29.002345 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-05 00:49:29.002350 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-05 00:49:29.002354 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-05 00:49:29.002359 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-05 00:49:29.002364 | orchestrator |
2026-03-05 00:49:29.002368 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.]
****************** 2026-03-05 00:49:29.002373 | orchestrator | Thursday 05 March 2026 00:49:24 +0000 (0:00:03.881) 0:00:14.594 ******** 2026-03-05 00:49:29.002378 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-03-05 00:49:29.002383 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-03-05 00:49:29.002387 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-03-05 00:49:29.002396 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-03-05 00:49:29.002400 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-03-05 00:49:29.002405 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-03-05 00:49:29.002410 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-03-05 00:49:29.002414 | orchestrator | 2026-03-05 00:49:29.002419 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:49:29.002432 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:49:29.002440 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:49:29.002447 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:49:29.002453 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:49:29.002459 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:49:29.002469 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:49:29.002478 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:49:29.002483 | orchestrator | 2026-03-05 00:49:29.002489 | orchestrator | 2026-03-05 00:49:29.002495 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-03-05 00:49:29.002501 | orchestrator | Thursday 05 March 2026 00:49:26 +0000 (0:00:02.624) 0:00:17.219 ******** 2026-03-05 00:49:29.002508 | orchestrator | =============================================================================== 2026-03-05 00:49:29.002514 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.42s 2026-03-05 00:49:29.002520 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 3.88s 2026-03-05 00:49:29.002526 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.70s 2026-03-05 00:49:29.002532 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.62s 2026-03-05 00:49:29.002538 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.56s 2026-03-05 00:49:29.002549 | orchestrator | 2026-03-05 00:49:28 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED 2026-03-05 00:49:29.002555 | orchestrator | 2026-03-05 00:49:28 | INFO  | Task 81ce8f6c-4cb0-4e45-80f5-58c22416f57c is in state SUCCESS 2026-03-05 00:49:29.002562 | orchestrator | 2026-03-05 00:49:28 | INFO  | Task 7af6cb7e-a87c-4029-8cff-629e0fa56bfc is in state STARTED 2026-03-05 00:49:29.002568 | orchestrator | 2026-03-05 00:49:28 | INFO  | Task 61c459e1-3321-47c0-b687-7d20cadb70de is in state STARTED 2026-03-05 00:49:29.002573 | orchestrator | 2026-03-05 00:49:28 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:49:29.002579 | orchestrator | 2026-03-05 00:49:28 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:49:29.002585 | orchestrator | 2026-03-05 00:49:28 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED 2026-03-05 00:49:29.002589 | orchestrator | 2026-03-05 00:49:28 | INFO  | Task 
14be7985-99e0-4374-9c6f-ebe74e622898 is in state STARTED
2026-03-05 00:49:29.002593 | orchestrator | 2026-03-05 00:49:28 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated at 00:49:31, 00:49:35, 00:49:38, 00:49:41, 00:49:44, 00:49:47, 00:49:50, 00:49:53, 00:49:56 and 00:49:59; tasks c6aff7a8, 7af6cb7e, 61c459e1, 5ff43d0b, 36611c07, 18d64148 and 14be7985 all remained in state STARTED ...]
2026-03-05 00:50:03.216295 | orchestrator | 2026-03-05 00:50:02 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED
2026-03-05 00:50:03.216372 | orchestrator | 2026-03-05 00:50:02 | INFO  | Task 7af6cb7e-a87c-4029-8cff-629e0fa56bfc is in state STARTED
2026-03-05 00:50:03.216379 | orchestrator | 2026-03-05 00:50:02 | INFO  | Task 61c459e1-3321-47c0-b687-7d20cadb70de is in state STARTED
2026-03-05 00:50:03.216383 | orchestrator | 2026-03-05 00:50:02 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:50:03.216387 | orchestrator | 2026-03-05 00:50:02 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:50:03.216409 | orchestrator | 2026-03-05 00:50:02 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:50:03.216413 | orchestrator | 2026-03-05 00:50:02 | INFO  | Task 14be7985-99e0-4374-9c6f-ebe74e622898 is in state SUCCESS
2026-03-05 00:50:03.216417 | orchestrator | 2026-03-05 00:50:02 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated at 00:50:06 and 00:50:09; the six remaining tasks stayed in state STARTED ...]
2026-03-05 00:50:12.190609 | orchestrator | 2026-03-05 00:50:12 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED
2026-03-05 00:50:12.191016 | orchestrator | 2026-03-05 00:50:12 | INFO  | Task 7af6cb7e-a87c-4029-8cff-629e0fa56bfc is in state SUCCESS
2026-03-05 00:50:12.194374 | orchestrator | 2026-03-05 00:50:12 | INFO  | Task 61c459e1-3321-47c0-b687-7d20cadb70de is in state STARTED
2026-03-05 00:50:12.195554 | orchestrator | 2026-03-05 00:50:12 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:50:12.197323 | orchestrator | 2026-03-05 00:50:12 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:50:12.198765 | orchestrator | 2026-03-05 00:50:12 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:50:12.198808 | orchestrator | 2026-03-05 00:50:12 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated at 00:50:15, 00:50:18, 00:50:21, 00:50:24, 00:50:27 and 00:50:30; tasks c6aff7a8, 61c459e1, 5ff43d0b, 36611c07 and 18d64148 remained in state STARTED ...]
2026-03-05 00:50:34.183448 | orchestrator | 2026-03-05 00:50:34 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED
2026-03-05 00:50:34.183523 | orchestrator | 2026-03-05 00:50:34 | INFO  | Task 61c459e1-3321-47c0-b687-7d20cadb70de is in state STARTED
2026-03-05 00:50:34.183529 | orchestrator | 2026-03-05 00:50:34 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:50:34.183534 | orchestrator | 2026-03-05 00:50:34 | INFO  | Task 
36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:50:34.183538 | orchestrator | 2026-03-05 00:50:34 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED 2026-03-05 00:50:34.183542 | orchestrator | 2026-03-05 00:50:34 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:50:37.161765 | orchestrator | 2026-03-05 00:50:37 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED 2026-03-05 00:50:37.164842 | orchestrator | 2026-03-05 00:50:37 | INFO  | Task 61c459e1-3321-47c0-b687-7d20cadb70de is in state STARTED 2026-03-05 00:50:37.165971 | orchestrator | 2026-03-05 00:50:37 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:50:37.191322 | orchestrator | 2026-03-05 00:50:37 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:50:37.191404 | orchestrator | 2026-03-05 00:50:37 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED 2026-03-05 00:50:37.191412 | orchestrator | 2026-03-05 00:50:37 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:50:40.272460 | orchestrator | 2026-03-05 00:50:40 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED 2026-03-05 00:50:40.275213 | orchestrator | 2026-03-05 00:50:40 | INFO  | Task 61c459e1-3321-47c0-b687-7d20cadb70de is in state STARTED 2026-03-05 00:50:40.278755 | orchestrator | 2026-03-05 00:50:40 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:50:40.284629 | orchestrator | 2026-03-05 00:50:40 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:50:40.284999 | orchestrator | 2026-03-05 00:50:40 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED 2026-03-05 00:50:40.285059 | orchestrator | 2026-03-05 00:50:40 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:50:43.350595 | orchestrator | 2026-03-05 00:50:43 | INFO  | Task 
c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED
2026-03-05 00:50:43.350793 | orchestrator | 2026-03-05 00:50:43 | INFO  | Task 61c459e1-3321-47c0-b687-7d20cadb70de is in state SUCCESS
2026-03-05 00:50:43.353853 | orchestrator |
2026-03-05 00:50:43.353910 | orchestrator |
2026-03-05 00:50:43.353918 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-05 00:50:43.353926 | orchestrator |
2026-03-05 00:50:43.353933 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-05 00:50:43.353957 | orchestrator | Thursday 05 March 2026 00:49:09 +0000 (0:00:01.154) 0:00:01.154 ********
2026-03-05 00:50:43.353964 | orchestrator | ok: [testbed-manager] => {
2026-03-05 00:50:43.353973 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-05 00:50:43.354000 | orchestrator | }
2026-03-05 00:50:43.354007 | orchestrator |
2026-03-05 00:50:43.354053 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-05 00:50:43.354060 | orchestrator | Thursday 05 March 2026 00:49:10 +0000 (0:00:00.536) 0:00:01.690 ********
2026-03-05 00:50:43.354067 | orchestrator | ok: [testbed-manager]
2026-03-05 00:50:43.354074 | orchestrator |
2026-03-05 00:50:43.354081 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-05 00:50:43.354088 | orchestrator | Thursday 05 March 2026 00:49:12 +0000 (0:00:01.760) 0:00:03.450 ********
2026-03-05 00:50:43.354095 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-05 00:50:43.354102 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-05 00:50:43.354109 | orchestrator |
2026-03-05 00:50:43.354116 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-05 00:50:43.354122 | orchestrator | Thursday 05 March 2026 00:49:14 +0000 (0:00:01.903) 0:00:05.354 ********
2026-03-05 00:50:43.354129 | orchestrator | changed: [testbed-manager]
2026-03-05 00:50:43.354135 | orchestrator |
2026-03-05 00:50:43.354142 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-05 00:50:43.354148 | orchestrator | Thursday 05 March 2026 00:49:19 +0000 (0:00:05.800) 0:00:11.154 ********
2026-03-05 00:50:43.354155 | orchestrator | changed: [testbed-manager]
2026-03-05 00:50:43.354161 | orchestrator |
2026-03-05 00:50:43.354168 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-05 00:50:43.354174 | orchestrator | Thursday 05 March 2026 00:49:22 +0000 (0:00:02.372) 0:00:13.527 ********
2026-03-05 00:50:43.354181 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-05 00:50:43.354187 | orchestrator | ok: [testbed-manager]
2026-03-05 00:50:43.354194 | orchestrator |
2026-03-05 00:50:43.354200 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-05 00:50:43.354207 | orchestrator | Thursday 05 March 2026 00:49:55 +0000 (0:00:33.208) 0:00:46.736 ********
2026-03-05 00:50:43.354213 | orchestrator | changed: [testbed-manager]
2026-03-05 00:50:43.354220 | orchestrator |
2026-03-05 00:50:43.354226 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:50:43.354233 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:50:43.354242 | orchestrator |
2026-03-05 00:50:43.354248 | orchestrator |
2026-03-05 00:50:43.354255 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:50:43.354261 | orchestrator | Thursday 05 March 2026 00:50:00 +0000 (0:00:04.861) 0:00:51.598 ********
2026-03-05 00:50:43.354268 | orchestrator | ===============================================================================
2026-03-05 00:50:43.354274 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 33.21s
2026-03-05 00:50:43.354281 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 5.80s
2026-03-05 00:50:43.354287 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.86s
2026-03-05 00:50:43.354294 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.37s
2026-03-05 00:50:43.354300 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.90s
2026-03-05 00:50:43.354307 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.76s
2026-03-05 00:50:43.354313 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.54s
2026-03-05 00:50:43.354320 | orchestrator |
2026-03-05 00:50:43.354326 | orchestrator |
2026-03-05 00:50:43.354333 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-05 00:50:43.354339 | orchestrator |
2026-03-05 00:50:43.354346 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-05 00:50:43.354352 | orchestrator | Thursday 05 March 2026 00:49:09 +0000 (0:00:00.786) 0:00:00.787 ********
2026-03-05 00:50:43.354363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-05 00:50:43.354371 | orchestrator |
2026-03-05 00:50:43.354378 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-05 00:50:43.354384 | orchestrator | Thursday 05 March 2026 00:49:10 +0000 (0:00:00.788) 0:00:01.582 ********
2026-03-05 00:50:43.354391 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-05 00:50:43.354397 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-05 00:50:43.354404 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-05 00:50:43.354411 | orchestrator |
2026-03-05 00:50:43.354417 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-05 00:50:43.354424 | orchestrator | Thursday 05 March 2026 00:49:12 +0000 (0:00:02.081) 0:00:03.663 ********
2026-03-05 00:50:43.354430 | orchestrator | changed: [testbed-manager]
2026-03-05 00:50:43.354437 | orchestrator |
2026-03-05 00:50:43.354444 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-05 00:50:43.354452 | orchestrator | Thursday 05 March 2026 00:49:16 +0000 (0:00:04.445) 0:00:08.109 ********
2026-03-05 00:50:43.354471 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-05 00:50:43.354479 | orchestrator | ok: [testbed-manager]
2026-03-05 00:50:43.354486 | orchestrator |
2026-03-05 00:50:43.354493 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-05 00:50:43.354500 | orchestrator | Thursday 05 March 2026 00:49:55 +0000 (0:00:38.428) 0:00:46.537 ********
2026-03-05 00:50:43.354511 | orchestrator | changed: [testbed-manager]
2026-03-05 00:50:43.354518 | orchestrator |
2026-03-05 00:50:43.354525 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-05 00:50:43.354532 | orchestrator | Thursday 05 March 2026 00:49:59 +0000 (0:00:04.629) 0:00:51.166 ********
2026-03-05 00:50:43.354539 | orchestrator | ok: [testbed-manager]
2026-03-05 00:50:43.354546 | orchestrator |
2026-03-05 00:50:43.354553 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-05 00:50:43.354560 | orchestrator | Thursday 05 March 2026 00:50:01 +0000 (0:00:01.348) 0:00:52.515 ********
2026-03-05 00:50:43.354567 | orchestrator | changed: [testbed-manager]
2026-03-05 00:50:43.354574 | orchestrator |
2026-03-05 00:50:43.354581 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-05 00:50:43.354589 | orchestrator | Thursday 05 March 2026 00:50:06 +0000 (0:00:04.991) 0:00:57.507 ********
2026-03-05 00:50:43.354595 | orchestrator | changed: [testbed-manager]
2026-03-05 00:50:43.354602 | orchestrator |
2026-03-05 00:50:43.354610 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-05 00:50:43.354617 | orchestrator | Thursday 05 March 2026 00:50:07 +0000 (0:00:01.272) 0:00:58.779 ********
2026-03-05 00:50:43.354624 | orchestrator | changed: [testbed-manager]
2026-03-05 00:50:43.354631 | orchestrator |
2026-03-05 00:50:43.354638 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-05 00:50:43.354645 | orchestrator | Thursday 05 March 2026 00:50:08 +0000 (0:00:00.933) 0:00:59.712 ********
2026-03-05 00:50:43.354652 | orchestrator | ok: [testbed-manager]
2026-03-05 00:50:43.354659 | orchestrator |
2026-03-05 00:50:43.354680 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:50:43.354687 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:50:43.354694 | orchestrator |
2026-03-05 00:50:43.354701 | orchestrator |
2026-03-05 00:50:43.354708 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:50:43.354716 | orchestrator | Thursday 05 March 2026 00:50:08 +0000 (0:00:00.500) 0:01:00.213 ********
2026-03-05 00:50:43.354732 | orchestrator | ===============================================================================
2026-03-05 00:50:43.354739 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 38.43s
2026-03-05 00:50:43.354746 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 4.99s
2026-03-05 00:50:43.354753 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 4.63s
2026-03-05 00:50:43.354760 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 4.45s
2026-03-05 00:50:43.354767 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.08s
2026-03-05 00:50:43.354774 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.35s
2026-03-05 00:50:43.354781 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.27s
2026-03-05 00:50:43.354788 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.93s
2026-03-05 00:50:43.354795 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.80s
2026-03-05 00:50:43.354802 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.50s
2026-03-05 00:50:43.354809 | orchestrator |
2026-03-05 00:50:43.354817 | orchestrator |
2026-03-05 00:50:43.354824 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-05 00:50:43.354831 | orchestrator |
2026-03-05 00:50:43.354837 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-05 00:50:43.354844 | orchestrator | Thursday 05 March 2026 00:49:32 +0000 (0:00:00.265) 0:00:00.265 ********
2026-03-05 00:50:43.354850 | orchestrator | ok: [testbed-manager]
2026-03-05 00:50:43.354857 | orchestrator |
2026-03-05 00:50:43.354863 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-05 00:50:43.354870 | orchestrator | Thursday 05 March 2026 00:49:33 +0000 (0:00:01.459) 0:00:01.724 ********
2026-03-05 00:50:43.354877 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-05 00:50:43.354883 | orchestrator |
2026-03-05 00:50:43.354931 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-05 00:50:43.354938 | orchestrator | Thursday 05 March 2026 00:49:34 +0000 (0:00:00.742) 0:00:02.466 ********
2026-03-05 00:50:43.354944 | orchestrator | changed: [testbed-manager]
2026-03-05 00:50:43.354951 | orchestrator |
2026-03-05 00:50:43.354957 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-05 00:50:43.354964 | orchestrator | Thursday 05 March 2026 00:49:36 +0000 (0:00:01.418) 0:00:03.885 ********
2026-03-05 00:50:43.354970 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-05 00:50:43.354977 | orchestrator | ok: [testbed-manager]
2026-03-05 00:50:43.354983 | orchestrator |
2026-03-05 00:50:43.354990 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-05 00:50:43.354996 | orchestrator | Thursday 05 March 2026 00:50:36 +0000 (0:01:00.021) 0:01:03.907 ********
2026-03-05 00:50:43.355003 | orchestrator | changed: [testbed-manager]
2026-03-05 00:50:43.355009 | orchestrator |
2026-03-05 00:50:43.355016 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:50:43.355022 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:50:43.355029 | orchestrator |
2026-03-05 00:50:43.355035 | orchestrator |
2026-03-05 00:50:43.355042 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:50:43.355052 | orchestrator | Thursday 05 March 2026 00:50:40 +0000 (0:00:04.702) 0:01:08.609 ********
2026-03-05 00:50:43.355059 | orchestrator | ===============================================================================
2026-03-05 00:50:43.355066 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 60.02s
2026-03-05 00:50:43.355076 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.70s
2026-03-05 00:50:43.355082 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.46s
2026-03-05 00:50:43.355093 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.42s
2026-03-05 00:50:43.355099 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.74s
2026-03-05 00:50:43.355302 | orchestrator | 2026-03-05 00:50:43 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:50:43.355938 | orchestrator | 2026-03-05 00:50:43 | 
INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:50:43.357992 | orchestrator | 2026-03-05 00:50:43 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:50:43.358084 | orchestrator | 2026-03-05 00:50:43 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:46.413227 | orchestrator | 2026-03-05 00:50:46 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED
2026-03-05 00:50:46.414244 | orchestrator | 2026-03-05 00:50:46 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:50:46.419457 | orchestrator | 2026-03-05 00:50:46 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:50:46.425892 | orchestrator | 2026-03-05 00:50:46 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:50:46.426180 | orchestrator | 2026-03-05 00:50:46 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:49.505867 | orchestrator | 2026-03-05 00:50:49 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED
2026-03-05 00:50:49.506603 | orchestrator | 2026-03-05 00:50:49 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:50:49.510728 | orchestrator | 2026-03-05 00:50:49 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:50:49.511436 | orchestrator | 2026-03-05 00:50:49 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:50:49.511462 | orchestrator | 2026-03-05 00:50:49 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:52.552422 | orchestrator | 2026-03-05 00:50:52 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED
2026-03-05 00:50:52.552528 | orchestrator | 2026-03-05 00:50:52 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:50:52.553256 | orchestrator | 2026-03-05 00:50:52 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:50:52.559898 | orchestrator | 2026-03-05 00:50:52 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:50:52.559973 | orchestrator | 2026-03-05 00:50:52 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:55.634260 | orchestrator | 2026-03-05 00:50:55 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED
2026-03-05 00:50:55.637101 | orchestrator | 2026-03-05 00:50:55 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:50:55.637568 | orchestrator | 2026-03-05 00:50:55 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:50:55.645244 | orchestrator | 2026-03-05 00:50:55 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:50:55.645308 | orchestrator | 2026-03-05 00:50:55 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:50:58.691222 | orchestrator | 2026-03-05 00:50:58 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state STARTED
2026-03-05 00:50:58.693421 | orchestrator | 2026-03-05 00:50:58 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:50:58.695187 | orchestrator | 2026-03-05 00:50:58 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:50:58.696830 | orchestrator | 2026-03-05 00:50:58 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:50:58.697215 | orchestrator | 2026-03-05 00:50:58 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:51:01.762263 | orchestrator |
2026-03-05 00:51:01.762343 | orchestrator |
2026-03-05 00:51:01.762350 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 00:51:01.762356 | orchestrator |
2026-03-05 00:51:01.762360 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 00:51:01.762365 | orchestrator | Thursday 05 March 2026 00:49:09 +0000 (0:00:01.026) 0:00:01.026 ********
2026-03-05 00:51:01.762370 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-05 00:51:01.762374 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-05 00:51:01.762378 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-05 00:51:01.762394 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-05 00:51:01.762399 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-05 00:51:01.762402 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-05 00:51:01.762406 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-05 00:51:01.762410 | orchestrator |
2026-03-05 00:51:01.762414 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-05 00:51:01.762418 | orchestrator |
2026-03-05 00:51:01.762422 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-05 00:51:01.762425 | orchestrator | Thursday 05 March 2026 00:49:11 +0000 (0:00:01.760) 0:00:02.790 ********
2026-03-05 00:51:01.762438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:51:01.762444 | orchestrator |
2026-03-05 00:51:01.762448 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-05 00:51:01.762452 | orchestrator | Thursday 05 March 2026 00:49:13 +0000 (0:00:02.085) 0:00:04.875 ********
2026-03-05 00:51:01.762455 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:51:01.762461 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:51:01.762465 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:51:01.762468 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:51:01.762472 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:51:01.762476 | orchestrator | ok: [testbed-manager]
2026-03-05 00:51:01.762480 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:51:01.762483 | orchestrator |
2026-03-05 00:51:01.762487 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-05 00:51:01.762493 | orchestrator | Thursday 05 March 2026 00:49:16 +0000 (0:00:02.819) 0:00:07.694 ********
2026-03-05 00:51:01.762499 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:51:01.762505 | orchestrator | ok: [testbed-manager]
2026-03-05 00:51:01.762511 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:51:01.762517 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:51:01.762524 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:51:01.762530 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:51:01.762536 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:51:01.762542 | orchestrator |
2026-03-05 00:51:01.762549 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-05 00:51:01.762554 | orchestrator | Thursday 05 March 2026 00:49:20 +0000 (0:00:04.833) 0:00:12.528 ********
2026-03-05 00:51:01.762562 | orchestrator | changed: [testbed-manager]
2026-03-05 00:51:01.762566 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:51:01.762570 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:51:01.762575 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:51:01.762581 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:51:01.762587 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:51:01.762610 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:51:01.762617 | orchestrator |
2026-03-05 00:51:01.762624 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-05 00:51:01.762631 | orchestrator | Thursday 05 March 2026 00:49:24 +0000 (0:00:03.162) 0:00:15.691 ********
2026-03-05 00:51:01.762637 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:51:01.762688 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:51:01.762695 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:51:01.762701 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:51:01.762708 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:51:01.762714 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:51:01.762721 | orchestrator | changed: [testbed-manager]
2026-03-05 00:51:01.762727 | orchestrator |
2026-03-05 00:51:01.762734 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-05 00:51:01.762740 | orchestrator | Thursday 05 March 2026 00:49:39 +0000 (0:00:15.831) 0:00:31.522 ********
2026-03-05 00:51:01.762747 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:51:01.762753 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:51:01.762760 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:51:01.762766 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:51:01.762773 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:51:01.762779 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:51:01.762786 | orchestrator | changed: [testbed-manager]
2026-03-05 00:51:01.762792 | orchestrator |
2026-03-05 00:51:01.762799 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-05 00:51:01.762806 | orchestrator | Thursday 05 March 2026 00:50:29 +0000 (0:00:49.239) 0:01:20.762 ********
2026-03-05 00:51:01.762814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:51:01.762822 | orchestrator |
2026-03-05 00:51:01.762829 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-05 00:51:01.762836 | orchestrator | Thursday 05 March 2026 00:50:31 +0000 (0:00:01.783) 0:01:22.546 ********
2026-03-05 00:51:01.762842 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-05 00:51:01.762943 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-05 00:51:01.762954 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-05 00:51:01.762960 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-05 00:51:01.762981 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-05 00:51:01.762988 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-05 00:51:01.762995 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-05 00:51:01.763001 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-05 00:51:01.763008 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-05 00:51:01.763012 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-05 00:51:01.763016 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-05 00:51:01.763020 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-05 00:51:01.763029 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-05 00:51:01.763034 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-05 00:51:01.763040 | orchestrator |
2026-03-05 00:51:01.763046 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-05 00:51:01.763053 | orchestrator | Thursday 05 March 2026 00:50:37 +0000 (0:00:06.029) 0:01:28.575 ********
2026-03-05 00:51:01.763059 | orchestrator | ok: [testbed-manager]
2026-03-05 00:51:01.763066 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:51:01.763072 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:51:01.763078 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:51:01.763085 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:51:01.763099 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:51:01.763106 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:51:01.763112 | orchestrator |
2026-03-05 00:51:01.763119 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-05 00:51:01.763125 | orchestrator | Thursday 05 March 2026 00:50:38 +0000 (0:00:01.565) 0:01:30.143 ********
2026-03-05 00:51:01.763131 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:51:01.763138 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:51:01.763144 | orchestrator | changed: [testbed-manager]
2026-03-05 00:51:01.763150 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:51:01.763156 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:51:01.763163 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:51:01.763169 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:51:01.763175 | orchestrator |
2026-03-05 00:51:01.763180 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-05 00:51:01.763187 | orchestrator | Thursday 05 March 2026 00:50:41 +0000 (0:00:02.555) 0:01:32.698 ********
2026-03-05 00:51:01.763193 | orchestrator | ok: [testbed-manager]
2026-03-05 00:51:01.763199 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:51:01.763205 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:51:01.763211 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:51:01.763218 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:51:01.763224 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:51:01.763230 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:51:01.763237 | orchestrator |
2026-03-05 00:51:01.763243 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-05 00:51:01.763249 | orchestrator | Thursday 05 March 2026 00:50:43 +0000 (0:00:02.015) 0:01:34.760 ********
2026-03-05 00:51:01.763256 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:51:01.763262 | orchestrator | ok: [testbed-manager]
2026-03-05 00:51:01.763268 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:51:01.763274 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:51:01.763281 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:51:01.763285 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:51:01.763289 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:51:01.763293 | orchestrator |
2026-03-05 00:51:01.763297 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-05 00:51:01.763301 | orchestrator | Thursday 05 March 2026 00:50:45 +0000 (0:00:02.015) 0:01:36.775 ********
2026-03-05 00:51:01.763305 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-05 00:51:01.763312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:51:01.763317 | orchestrator |
2026-03-05 00:51:01.763320 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-05 00:51:01.763326 | orchestrator | Thursday 05 March 2026 00:50:46 +0000 (0:00:01.662) 0:01:38.437 ********
2026-03-05 00:51:01.763332 | orchestrator | changed: [testbed-manager]
2026-03-05 00:51:01.763338 | orchestrator |
2026-03-05 00:51:01.763344 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-05 00:51:01.763350 | orchestrator | Thursday 05 March 2026 00:50:49 +0000 (0:00:02.192) 0:01:40.629 ********
2026-03-05 00:51:01.763356 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:51:01.763363 | 
orchestrator | changed: [testbed-node-3] 2026-03-05 00:51:01.763369 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:51:01.763375 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:51:01.763381 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:51:01.763388 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:51:01.763394 | orchestrator | changed: [testbed-manager] 2026-03-05 00:51:01.763400 | orchestrator | 2026-03-05 00:51:01.763406 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:51:01.763413 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:51:01.763426 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:51:01.763433 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:51:01.763439 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:51:01.763451 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:51:01.763457 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:51:01.763464 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:51:01.763469 | orchestrator | 2026-03-05 00:51:01.763557 | orchestrator | 2026-03-05 00:51:01.763563 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:51:01.763567 | orchestrator | Thursday 05 March 2026 00:51:00 +0000 (0:00:11.227) 0:01:51.857 ******** 2026-03-05 00:51:01.763571 | orchestrator | =============================================================================== 2026-03-05 00:51:01.763575 | orchestrator | 
osism.services.netdata : Install package netdata ----------------------- 49.24s 2026-03-05 00:51:01.763579 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.83s 2026-03-05 00:51:01.763584 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.23s 2026-03-05 00:51:01.763589 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.03s 2026-03-05 00:51:01.763593 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.83s 2026-03-05 00:51:01.763599 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.16s 2026-03-05 00:51:01.763605 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.82s 2026-03-05 00:51:01.763619 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.56s 2026-03-05 00:51:01.763626 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.19s 2026-03-05 00:51:01.763633 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.09s 2026-03-05 00:51:01.763666 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.06s 2026-03-05 00:51:01.763674 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.02s 2026-03-05 00:51:01.763681 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.78s 2026-03-05 00:51:01.763688 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.76s 2026-03-05 00:51:01.763695 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.66s 2026-03-05 00:51:01.763702 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.57s 2026-03-05 00:51:01.763711 | 
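As an aside on the PLAY RECAP block above: each line carries per-host counters in a fixed `key=value` format, so build results can be checked mechanically. Below is a small illustrative sketch (the `parse_recap_line` helper is hypothetical, not part of the job) that extracts the counters through `failed` from one recap line.

```python
import re

# Matches one Ansible "PLAY RECAP" host line, e.g.
#   testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 ...
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap_line(line: str) -> dict:
    """Return host name and integer counters from a recap line, or {} if
    the line is not a recap line (headers, blank separators, etc.)."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return {}
    d = m.groupdict()
    return {"host": d["host"], **{k: int(v) for k, v in d.items() if k != "host"}}

line = "testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
print(parse_recap_line(line))
```

A gate such as "fail the build if any host reports `failed` or `unreachable` greater than zero" then becomes a one-line check over the parsed dicts.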
orchestrator | 2026-03-05 00:51:01 | INFO  | Task c6aff7a8-5089-4e08-86d0-295a11436423 is in state SUCCESS
2026-03-05 00:51:01.763722 | orchestrator | 2026-03-05 00:51:01 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:51:01.768072 | orchestrator | 2026-03-05 00:51:01 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:51:01.777094 | orchestrator | 2026-03-05 00:51:01 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:51:01.777998 | orchestrator | 2026-03-05 00:51:01 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:51:04.838279 | orchestrator | 2026-03-05 00:51:04 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:51:04.838392 | orchestrator | 2026-03-05 00:51:04 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:51:04.839713 | orchestrator | 2026-03-05 00:51:04 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:51:04.839767 | orchestrator | 2026-03-05 00:51:04 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:51:07.882159 | orchestrator | 2026-03-05 00:51:07 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:51:07.882943 | orchestrator | 2026-03-05 00:51:07 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:51:07.885785 | orchestrator | 2026-03-05 00:51:07 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:51:07.885817 | orchestrator | 2026-03-05 00:51:07 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:51:10.932398 | orchestrator | 2026-03-05 00:51:10 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:51:10.933716 | orchestrator | 2026-03-05 00:51:10 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:51:10.933743 | orchestrator | 2026-03-05 00:51:10 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:51:10.933747 | orchestrator | 2026-03-05 00:51:10 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:51:13.984617 | orchestrator | 2026-03-05 00:51:13 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:51:13.987041 | orchestrator | 2026-03-05 00:51:13 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:51:13.990072 | orchestrator | 2026-03-05 00:51:13 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:51:13.990110 | orchestrator | 2026-03-05 00:51:13 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:51:17.031535 | orchestrator | 2026-03-05 00:51:17 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:51:17.032824 | orchestrator | 2026-03-05 00:51:17 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:51:17.036110 | orchestrator | 2026-03-05 00:51:17 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:51:17.036179 | orchestrator | 2026-03-05 00:51:17 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:51:20.074710 | orchestrator | 2026-03-05 00:51:20 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:51:20.074822 | orchestrator | 2026-03-05 00:51:20 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:51:20.075514 | orchestrator | 2026-03-05 00:51:20 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:51:20.075589 | orchestrator | 2026-03-05 00:51:20 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:51:23.120843 | orchestrator | 2026-03-05 00:51:23 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:51:23.120934 | orchestrator | 2026-03-05 00:51:23 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:51:23.122426 | orchestrator | 2026-03-05 00:51:23 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:51:23.122472 | orchestrator | 2026-03-05 00:51:23 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:51:26.181381 | orchestrator | 2026-03-05 00:51:26 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:51:26.181502 | orchestrator | 2026-03-05 00:51:26 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:51:26.182432 | orchestrator | 2026-03-05 00:51:26 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:51:26.182477 | orchestrator | 2026-03-05 00:51:26 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:51:29.233918 | orchestrator | 2026-03-05 00:51:29 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:51:29.235408 | orchestrator | 2026-03-05 00:51:29 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:51:29.236617 | orchestrator | 2026-03-05 00:51:29 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state STARTED
2026-03-05 00:51:29.236933 | orchestrator | 2026-03-05 00:51:29 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:51:32.277590 | orchestrator | 2026-03-05 00:51:32 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:51:32.278204 | orchestrator | 2026-03-05 00:51:32 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:51:32.283192 | orchestrator | 2026-03-05 00:51:32 | INFO  | Task 18d64148-3a4c-4e9a-8885-ba0cf30c1858 is in state SUCCESS
2026-03-05 00:51:32.286359 | orchestrator |
2026-03-05 00:51:32.286868 | orchestrator |
2026-03-05 00:51:32.286885 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-05 00:51:32.286893 |
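The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above come from a client that polls remote task state until every task reaches a terminal state. A minimal sketch of that loop, assuming a caller-supplied `get_state` lookup (a hypothetical stand-in for the real OSISM API call):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}  # assumed terminal states

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=60):
    """Poll each task until all reach a terminal state.

    Prints log lines in the same shape as the job output above and
    returns True if everything finished within max_checks rounds.
    """
    pending = set(task_ids)
    for _ in range(max_checks):
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)  # stop polling finished tasks
        if not pending:
            return True
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    return False
```

Note that tasks already in a terminal state drop out of the polling set, which matches the log: once task `c6aff7a8…` reports SUCCESS it no longer appears in later polling rounds.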
orchestrator |
2026-03-05 00:51:32.286900 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-05 00:51:32.286908 | orchestrator | Thursday 05 March 2026 00:49:00 +0000 (0:00:00.285) 0:00:00.285 ********
2026-03-05 00:51:32.286917 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:51:32.286925 | orchestrator |
2026-03-05 00:51:32.286931 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-05 00:51:32.286938 | orchestrator | Thursday 05 March 2026 00:49:01 +0000 (0:00:01.273) 0:00:01.559 ********
2026-03-05 00:51:32.286945 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-05 00:51:32.286952 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-05 00:51:32.286959 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-05 00:51:32.286965 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-05 00:51:32.286972 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-05 00:51:32.286979 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-05 00:51:32.286985 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-05 00:51:32.286992 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-05 00:51:32.286999 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-05 00:51:32.287006 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-05 00:51:32.287013 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-05 00:51:32.287020 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-05 00:51:32.287026 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-05 00:51:32.287038 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-05 00:51:32.287045 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-05 00:51:32.287066 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-05 00:51:32.287073 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-05 00:51:32.287080 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-05 00:51:32.287087 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-05 00:51:32.287093 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-05 00:51:32.287100 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-05 00:51:32.287107 | orchestrator |
2026-03-05 00:51:32.287114 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-05 00:51:32.287121 | orchestrator | Thursday 05 March 2026 00:49:05 +0000 (0:00:04.036) 0:00:05.596 ********
2026-03-05 00:51:32.287128 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 00:51:32.287135 | orchestrator |
2026-03-05 00:51:32.287142 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-05 00:51:32.287149 | orchestrator | Thursday 05 March 2026 00:49:07 +0000 (0:00:01.403) 0:00:07.000 ********
2026-03-05 00:51:32.287159 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-05 00:51:32.287170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-05 00:51:32.287223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-05 00:51:32.287232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-05 00:51:32.287239 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-05 00:51:32.287273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287285 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287358 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-05 00:51:32.287365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287384 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287391 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-05 00:51:32.287398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287405 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287412 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287459 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287476 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287483 | orchestrator |
2026-03-05 00:51:32.287491 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-05 00:51:32.287499 | orchestrator | Thursday 05 March 2026 00:49:13 +0000 (0:00:06.515) 0:00:13.515 ********
2026-03-05 00:51:32.287507 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-05 00:51:32.287515 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287523 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287531 | orchestrator | skipping: [testbed-manager]
2026-03-05 00:51:32.287558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-05 00:51:32.287566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-05 00:51:32.287647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287663 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:51:32.287726 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:51:32.287735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-05 00:51:32.287742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:51:32.287771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05
00:51:32.287777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.287785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.287791 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:51:32.287798 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:51:32.287811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:51:32.287818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.287825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.287832 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:51:32.287844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:51:32.287855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.287861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.287868 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:51:32.287874 | orchestrator | 2026-03-05 00:51:32.287881 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-05 00:51:32.287887 | orchestrator | Thursday 05 March 2026 00:49:16 +0000 (0:00:02.632) 0:00:16.147 ******** 2026-03-05 00:51:32.287898 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:51:32.287905 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.287912 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.287919 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:51:32.287925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:51:32.287936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.287947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.287954 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:51:32.287960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:51:32.287970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.287977 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.287983 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:51:32.287990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:51:32.287997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:51:32.288003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.288065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.288074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.288081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.288087 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:51:32.288094 | 
orchestrator | skipping: [testbed-node-3] 2026-03-05 00:51:32.288104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:51:32.288112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.288118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.288125 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:51:32.288132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-05 00:51:32.288146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.288153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.288160 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:51:32.288166 | orchestrator | 2026-03-05 00:51:32.288172 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-05 00:51:32.288179 | orchestrator | Thursday 05 March 2026 00:49:21 +0000 (0:00:05.242) 0:00:21.390 ******** 2026-03-05 00:51:32.288185 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:51:32.288191 | orchestrator | 
skipping: [testbed-node-0] 2026-03-05 00:51:32.288198 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:51:32.288204 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:51:32.288211 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:51:32.288217 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:51:32.288224 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:51:32.288230 | orchestrator | 2026-03-05 00:51:32.288236 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-05 00:51:32.288243 | orchestrator | Thursday 05 March 2026 00:49:23 +0000 (0:00:01.871) 0:00:23.262 ******** 2026-03-05 00:51:32.288249 | orchestrator | skipping: [testbed-manager] 2026-03-05 00:51:32.288255 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:51:32.288262 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:51:32.288268 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:51:32.288274 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:51:32.288281 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:51:32.288287 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:51:32.288293 | orchestrator | 2026-03-05 00:51:32.288300 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-05 00:51:32.288306 | orchestrator | Thursday 05 March 2026 00:49:26 +0000 (0:00:02.486) 0:00:25.748 ******** 2026-03-05 00:51:32.288318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-03-05 00:51:32.288325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.288336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.288344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288365 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.288373 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.288380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.288398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.288410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288442 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288484 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288499 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288506 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.288512 | orchestrator | 2026-03-05 00:51:32.288519 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-05 00:51:32.288525 | orchestrator | Thursday 05 March 2026 00:49:34 +0000 (0:00:08.497) 0:00:34.246 ******** 2026-03-05 00:51:32.288532 | orchestrator | [WARNING]: Skipped 2026-03-05 00:51:32.288539 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-05 00:51:32.288546 | orchestrator | to this access issue: 2026-03-05 00:51:32.288552 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-05 00:51:32.288558 | orchestrator | directory 2026-03-05 00:51:32.288565 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 00:51:32.288571 | orchestrator | 2026-03-05 00:51:32.288578 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-05 00:51:32.288584 | orchestrator | Thursday 05 March 2026 00:49:36 +0000 (0:00:01.935) 0:00:36.182 ******** 2026-03-05 00:51:32.288590 | orchestrator | [WARNING]: Skipped 2026-03-05 00:51:32.288627 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-05 00:51:32.288640 | orchestrator | to this access issue: 2026-03-05 00:51:32.288647 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-05 00:51:32.288653 | orchestrator | directory 2026-03-05 00:51:32.288660 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 00:51:32.288672 | orchestrator | 2026-03-05 00:51:32.288679 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-05 00:51:32.288685 | orchestrator | Thursday 05 March 2026 00:49:37 +0000 (0:00:01.465) 0:00:37.648 ******** 2026-03-05 00:51:32.288691 | orchestrator | [WARNING]: Skipped 2026-03-05 00:51:32.288701 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-05 00:51:32.288708 | orchestrator | to this access issue: 2026-03-05 00:51:32.288714 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-05 00:51:32.288721 | orchestrator | directory 2026-03-05 00:51:32.288727 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 00:51:32.288733 | orchestrator | 2026-03-05 00:51:32.288739 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-05 00:51:32.288746 | 
orchestrator | Thursday 05 March 2026 00:49:39 +0000 (0:00:01.714) 0:00:39.362 ******** 2026-03-05 00:51:32.288752 | orchestrator | [WARNING]: Skipped 2026-03-05 00:51:32.288758 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-05 00:51:32.288765 | orchestrator | to this access issue: 2026-03-05 00:51:32.288772 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-05 00:51:32.288778 | orchestrator | directory 2026-03-05 00:51:32.288784 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 00:51:32.288791 | orchestrator | 2026-03-05 00:51:32.288797 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-05 00:51:32.288803 | orchestrator | Thursday 05 March 2026 00:49:41 +0000 (0:00:02.262) 0:00:41.624 ******** 2026-03-05 00:51:32.288810 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:51:32.288816 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:51:32.288822 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:51:32.288828 | orchestrator | changed: [testbed-manager] 2026-03-05 00:51:32.288834 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:51:32.288841 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:51:32.288847 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:51:32.288853 | orchestrator | 2026-03-05 00:51:32.288859 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-05 00:51:32.288865 | orchestrator | Thursday 05 March 2026 00:49:49 +0000 (0:00:07.968) 0:00:49.593 ******** 2026-03-05 00:51:32.288872 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 00:51:32.288878 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 00:51:32.288885 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 00:51:32.288891 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 00:51:32.288897 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 00:51:32.288904 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 00:51:32.288910 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-05 00:51:32.288916 | orchestrator | 2026-03-05 00:51:32.288922 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-05 00:51:32.288929 | orchestrator | Thursday 05 March 2026 00:49:55 +0000 (0:00:05.490) 0:00:55.083 ******** 2026-03-05 00:51:32.288935 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:51:32.288942 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:51:32.288948 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:51:32.288954 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:51:32.288964 | orchestrator | changed: [testbed-manager] 2026-03-05 00:51:32.288971 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:51:32.288982 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:51:32.288988 | orchestrator | 2026-03-05 00:51:32.288995 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-05 00:51:32.289001 | orchestrator | Thursday 05 March 2026 00:49:59 +0000 (0:00:04.484) 0:00:59.568 ******** 2026-03-05 00:51:32.289008 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.289015 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.289021 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.289033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.289040 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289055 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.289066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.289077 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289085 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.289091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.289102 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.289108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.289115 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289122 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289139 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289146 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.289153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.289160 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-03-05 00:51:32.289170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:51:32.289177 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289183 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289190 | orchestrator | 2026-03-05 00:51:32.289196 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-05 00:51:32.289203 | orchestrator | Thursday 05 March 2026 00:50:04 +0000 (0:00:04.483) 0:01:04.051 ******** 2026-03-05 00:51:32.289214 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:51:32.289221 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:51:32.289227 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:51:32.289234 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:51:32.289240 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:51:32.289246 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:51:32.289253 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-05 00:51:32.289259 | orchestrator | 2026-03-05 00:51:32.289268 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-05 00:51:32.289275 | orchestrator | Thursday 05 March 2026 00:50:09 +0000 (0:00:04.827) 0:01:08.879 ******** 2026-03-05 00:51:32.289281 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:51:32.289288 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:51:32.289294 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:51:32.289300 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:51:32.289306 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:51:32.289313 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:51:32.289319 | orchestrator | changed: [testbed-node-5] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-05 00:51:32.289325 | orchestrator | 2026-03-05 00:51:32.289332 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-05 00:51:32.289338 | orchestrator | Thursday 05 March 2026 00:50:12 +0000 (0:00:03.181) 0:01:12.060 ******** 2026-03-05 00:51:32.289344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.289356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.289363 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.289370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.289381 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.289391 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289399 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.289405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289433 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-05 00:51:32.289440 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289449 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289457 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289486 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:51:32.289517 | orchestrator | 2026-03-05 00:51:32.289523 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-05 00:51:32.289529 | orchestrator | Thursday 05 March 2026 00:50:16 +0000 (0:00:03.807) 0:01:15.868 ******** 2026-03-05 00:51:32.289539 | orchestrator | changed: [testbed-manager] 2026-03-05 00:51:32.289546 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:51:32.289552 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:51:32.289559 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:51:32.289565 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:51:32.289572 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:51:32.289578 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:51:32.289584 | orchestrator | 2026-03-05 00:51:32.289603 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-05 00:51:32.289610 | orchestrator | Thursday 05 March 2026 00:50:17 +0000 (0:00:01.431) 0:01:17.299 ******** 2026-03-05 00:51:32.289616 | orchestrator | changed: [testbed-manager] 2026-03-05 00:51:32.289623 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:51:32.289629 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:51:32.289635 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:51:32.289642 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:51:32.289648 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:51:32.289655 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:51:32.289661 | orchestrator | 2026-03-05 00:51:32.289667 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-03-05 00:51:32.289674 | orchestrator | Thursday 05 March 2026 00:50:18 +0000 (0:00:01.169) 0:01:18.469 ******** 2026-03-05 00:51:32.289680 | orchestrator | 2026-03-05 00:51:32.289686 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-05 00:51:32.289693 | orchestrator | Thursday 05 March 2026 00:50:18 +0000 (0:00:00.071) 0:01:18.541 ******** 2026-03-05 00:51:32.289699 | orchestrator | 2026-03-05 00:51:32.289706 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-05 00:51:32.289712 | orchestrator | Thursday 05 March 2026 00:50:18 +0000 (0:00:00.066) 0:01:18.607 ******** 2026-03-05 00:51:32.289718 | orchestrator | 2026-03-05 00:51:32.289725 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-05 00:51:32.289735 | orchestrator | Thursday 05 March 2026 00:50:19 +0000 (0:00:00.247) 0:01:18.855 ******** 2026-03-05 00:51:32.289742 | orchestrator | 2026-03-05 00:51:32.289748 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-05 00:51:32.289755 | orchestrator | Thursday 05 March 2026 00:50:19 +0000 (0:00:00.067) 0:01:18.922 ******** 2026-03-05 00:51:32.289761 | orchestrator | 2026-03-05 00:51:32.289767 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-05 00:51:32.289774 | orchestrator | Thursday 05 March 2026 00:50:19 +0000 (0:00:00.065) 0:01:18.988 ******** 2026-03-05 00:51:32.289780 | orchestrator | 2026-03-05 00:51:32.289790 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-05 00:51:32.289796 | orchestrator | Thursday 05 March 2026 00:50:19 +0000 (0:00:00.064) 0:01:19.052 ******** 2026-03-05 00:51:32.289803 | orchestrator | 2026-03-05 00:51:32.289809 | orchestrator | RUNNING HANDLER [common : 
Restart fluentd container] *************************** 2026-03-05 00:51:32.289815 | orchestrator | Thursday 05 March 2026 00:50:19 +0000 (0:00:00.085) 0:01:19.138 ******** 2026-03-05 00:51:32.289822 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:51:32.289828 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:51:32.289834 | orchestrator | changed: [testbed-manager] 2026-03-05 00:51:32.289841 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:51:32.289847 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:51:32.289853 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:51:32.289860 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:51:32.289866 | orchestrator | 2026-03-05 00:51:32.289872 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-05 00:51:32.289879 | orchestrator | Thursday 05 March 2026 00:50:51 +0000 (0:00:32.193) 0:01:51.332 ******** 2026-03-05 00:51:32.289885 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:51:32.289892 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:51:32.289898 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:51:32.289904 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:51:32.289911 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:51:32.289917 | orchestrator | changed: [testbed-manager] 2026-03-05 00:51:32.289923 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:51:32.289929 | orchestrator | 2026-03-05 00:51:32.289936 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-05 00:51:32.289942 | orchestrator | Thursday 05 March 2026 00:51:19 +0000 (0:00:28.103) 0:02:19.435 ******** 2026-03-05 00:51:32.289949 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:51:32.289955 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:51:32.289962 | orchestrator | ok: [testbed-manager] 2026-03-05 00:51:32.289968 | orchestrator | ok: [testbed-node-2] 2026-03-05 
00:51:32.289975 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:51:32.289981 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:51:32.289987 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:51:32.289993 | orchestrator | 2026-03-05 00:51:32.290000 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-05 00:51:32.290006 | orchestrator | Thursday 05 March 2026 00:51:22 +0000 (0:00:02.337) 0:02:21.773 ******** 2026-03-05 00:51:32.290039 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:51:32.290047 | orchestrator | changed: [testbed-manager] 2026-03-05 00:51:32.290053 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:51:32.290060 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:51:32.290066 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:51:32.290072 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:51:32.290078 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:51:32.290085 | orchestrator | 2026-03-05 00:51:32.290091 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:51:32.290098 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:51:32.290105 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:51:32.290117 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:51:32.290128 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:51:32.290135 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:51:32.290141 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:51:32.290148 | orchestrator | testbed-node-5 : 
ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-05 00:51:32.290154 | orchestrator | 2026-03-05 00:51:32.290160 | orchestrator | 2026-03-05 00:51:32.290167 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:51:32.290173 | orchestrator | Thursday 05 March 2026 00:51:31 +0000 (0:00:09.370) 0:02:31.143 ******** 2026-03-05 00:51:32.290180 | orchestrator | =============================================================================== 2026-03-05 00:51:32.290186 | orchestrator | common : Restart fluentd container ------------------------------------- 32.19s 2026-03-05 00:51:32.290192 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 28.10s 2026-03-05 00:51:32.290198 | orchestrator | common : Restart cron container ----------------------------------------- 9.37s 2026-03-05 00:51:32.290205 | orchestrator | common : Copying over config.json files for services -------------------- 8.50s 2026-03-05 00:51:32.290211 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 7.97s 2026-03-05 00:51:32.290217 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.52s 2026-03-05 00:51:32.290223 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.49s 2026-03-05 00:51:32.290230 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 5.24s 2026-03-05 00:51:32.290236 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.83s 2026-03-05 00:51:32.290243 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.48s 2026-03-05 00:51:32.290249 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.48s 2026-03-05 00:51:32.290259 | orchestrator | common : Ensuring config directories exist 
------------------------------ 4.04s 2026-03-05 00:51:32.290265 | orchestrator | common : Check common containers ---------------------------------------- 3.81s 2026-03-05 00:51:32.290272 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.18s 2026-03-05 00:51:32.290279 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.63s 2026-03-05 00:51:32.290285 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.49s 2026-03-05 00:51:32.290291 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.34s 2026-03-05 00:51:32.290298 | orchestrator | common : Find custom fluentd output config files ------------------------ 2.26s 2026-03-05 00:51:32.290304 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.94s 2026-03-05 00:51:32.290310 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.87s 2026-03-05 00:51:32.290317 | orchestrator | 2026-03-05 00:51:32 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:35.324366 | orchestrator | 2026-03-05 00:51:35 | INFO  | Task d9bc020b-de7f-420e-8a25-b217647ed68f is in state STARTED 2026-03-05 00:51:35.325213 | orchestrator | 2026-03-05 00:51:35 | INFO  | Task 6ba19b88-0fd2-4130-8d8d-d0eafcabbce8 is in state STARTED 2026-03-05 00:51:35.325899 | orchestrator | 2026-03-05 00:51:35 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:51:35.328680 | orchestrator | 2026-03-05 00:51:35 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:51:35.329226 | orchestrator | 2026-03-05 00:51:35 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:51:35.330211 | orchestrator | 2026-03-05 00:51:35 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:51:35.330264 | 
orchestrator | 2026-03-05 00:51:35 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:38.355289 | orchestrator | 2026-03-05 00:51:38 | INFO  | Task d9bc020b-de7f-420e-8a25-b217647ed68f is in state STARTED 2026-03-05 00:51:38.356485 | orchestrator | 2026-03-05 00:51:38 | INFO  | Task 6ba19b88-0fd2-4130-8d8d-d0eafcabbce8 is in state STARTED 2026-03-05 00:51:38.357329 | orchestrator | 2026-03-05 00:51:38 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:51:38.357899 | orchestrator | 2026-03-05 00:51:38 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:51:38.358533 | orchestrator | 2026-03-05 00:51:38 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:51:38.359039 | orchestrator | 2026-03-05 00:51:38 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:51:38.359057 | orchestrator | 2026-03-05 00:51:38 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:41.390240 | orchestrator | 2026-03-05 00:51:41 | INFO  | Task d9bc020b-de7f-420e-8a25-b217647ed68f is in state STARTED 2026-03-05 00:51:41.390346 | orchestrator | 2026-03-05 00:51:41 | INFO  | Task 6ba19b88-0fd2-4130-8d8d-d0eafcabbce8 is in state STARTED 2026-03-05 00:51:41.390352 | orchestrator | 2026-03-05 00:51:41 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:51:41.391243 | orchestrator | 2026-03-05 00:51:41 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:51:41.391849 | orchestrator | 2026-03-05 00:51:41 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:51:41.393051 | orchestrator | 2026-03-05 00:51:41 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:51:41.393092 | orchestrator | 2026-03-05 00:51:41 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:44.424139 | orchestrator | 2026-03-05 
00:51:44 | INFO  | Task d9bc020b-de7f-420e-8a25-b217647ed68f is in state STARTED 2026-03-05 00:51:44.424702 | orchestrator | 2026-03-05 00:51:44 | INFO  | Task 6ba19b88-0fd2-4130-8d8d-d0eafcabbce8 is in state STARTED 2026-03-05 00:51:44.425868 | orchestrator | 2026-03-05 00:51:44 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:51:44.426973 | orchestrator | 2026-03-05 00:51:44 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:51:44.428038 | orchestrator | 2026-03-05 00:51:44 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:51:44.429027 | orchestrator | 2026-03-05 00:51:44 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:51:44.429055 | orchestrator | 2026-03-05 00:51:44 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:47.496746 | orchestrator | 2026-03-05 00:51:47 | INFO  | Task d9bc020b-de7f-420e-8a25-b217647ed68f is in state STARTED 2026-03-05 00:51:47.518669 | orchestrator | 2026-03-05 00:51:47 | INFO  | Task 6ba19b88-0fd2-4130-8d8d-d0eafcabbce8 is in state STARTED 2026-03-05 00:51:47.518761 | orchestrator | 2026-03-05 00:51:47 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:51:47.518798 | orchestrator | 2026-03-05 00:51:47 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:51:47.518807 | orchestrator | 2026-03-05 00:51:47 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:51:47.520868 | orchestrator | 2026-03-05 00:51:47 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:51:47.520936 | orchestrator | 2026-03-05 00:51:47 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:50.552876 | orchestrator | 2026-03-05 00:51:50 | INFO  | Task d9bc020b-de7f-420e-8a25-b217647ed68f is in state STARTED 2026-03-05 00:51:50.555977 | orchestrator | 2026-03-05 
00:51:50 | INFO  | Task 6ba19b88-0fd2-4130-8d8d-d0eafcabbce8 is in state STARTED 2026-03-05 00:51:50.557270 | orchestrator | 2026-03-05 00:51:50 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:51:50.559106 | orchestrator | 2026-03-05 00:51:50 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:51:50.560695 | orchestrator | 2026-03-05 00:51:50 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:51:50.562076 | orchestrator | 2026-03-05 00:51:50 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:51:50.562115 | orchestrator | 2026-03-05 00:51:50 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:53.596345 | orchestrator | 2026-03-05 00:51:53 | INFO  | Task d9bc020b-de7f-420e-8a25-b217647ed68f is in state SUCCESS 2026-03-05 00:51:53.600289 | orchestrator | 2026-03-05 00:51:53 | INFO  | Task 6ba19b88-0fd2-4130-8d8d-d0eafcabbce8 is in state STARTED 2026-03-05 00:51:53.600674 | orchestrator | 2026-03-05 00:51:53 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:51:53.601367 | orchestrator | 2026-03-05 00:51:53 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:51:53.606054 | orchestrator | 2026-03-05 00:51:53 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:51:53.607035 | orchestrator | 2026-03-05 00:51:53 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:51:53.610076 | orchestrator | 2026-03-05 00:51:53 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:51:53.610158 | orchestrator | 2026-03-05 00:51:53 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:56.649569 | orchestrator | 2026-03-05 00:51:56 | INFO  | Task 6ba19b88-0fd2-4130-8d8d-d0eafcabbce8 is in state STARTED 2026-03-05 00:51:56.650499 | orchestrator | 2026-03-05 
00:51:56 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:51:56.651259 | orchestrator | 2026-03-05 00:51:56 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:51:56.652456 | orchestrator | 2026-03-05 00:51:56 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:51:56.652861 | orchestrator | 2026-03-05 00:51:56 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:51:56.654353 | orchestrator | 2026-03-05 00:51:56 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:51:56.654390 | orchestrator | 2026-03-05 00:51:56 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:51:59.688465 | orchestrator | 2026-03-05 00:51:59 | INFO  | Task 6ba19b88-0fd2-4130-8d8d-d0eafcabbce8 is in state STARTED 2026-03-05 00:51:59.688821 | orchestrator | 2026-03-05 00:51:59 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:51:59.689875 | orchestrator | 2026-03-05 00:51:59 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:51:59.691151 | orchestrator | 2026-03-05 00:51:59 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:51:59.692328 | orchestrator | 2026-03-05 00:51:59 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:51:59.693941 | orchestrator | 2026-03-05 00:51:59 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:51:59.694076 | orchestrator | 2026-03-05 00:51:59 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:02.800354 | orchestrator | 2026-03-05 00:52:02.800443 | orchestrator | 2026-03-05 00:52:02.800459 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 00:52:02.800470 | orchestrator | 2026-03-05 00:52:02.800491 | orchestrator | TASK [Group hosts based on Kolla 
action] *************************************** 2026-03-05 00:52:02.800501 | orchestrator | Thursday 05 March 2026 00:51:37 +0000 (0:00:00.314) 0:00:00.314 ******** 2026-03-05 00:52:02.800512 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:52:02.800523 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:52:02.800533 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:52:02.800543 | orchestrator | 2026-03-05 00:52:02.800553 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 00:52:02.800563 | orchestrator | Thursday 05 March 2026 00:51:37 +0000 (0:00:00.289) 0:00:00.603 ******** 2026-03-05 00:52:02.800573 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-05 00:52:02.800583 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-05 00:52:02.800593 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-05 00:52:02.800603 | orchestrator | 2026-03-05 00:52:02.800613 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-05 00:52:02.800652 | orchestrator | 2026-03-05 00:52:02.800664 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-05 00:52:02.800675 | orchestrator | Thursday 05 March 2026 00:51:37 +0000 (0:00:00.423) 0:00:01.027 ******** 2026-03-05 00:52:02.800685 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:52:02.800696 | orchestrator | 2026-03-05 00:52:02.800706 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-05 00:52:02.800716 | orchestrator | Thursday 05 March 2026 00:51:38 +0000 (0:00:00.514) 0:00:01.542 ******** 2026-03-05 00:52:02.800726 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-05 00:52:02.800736 | orchestrator | changed: [testbed-node-2] => 
(item=memcached) 2026-03-05 00:52:02.800747 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-05 00:52:02.800757 | orchestrator | 2026-03-05 00:52:02.800766 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-05 00:52:02.800776 | orchestrator | Thursday 05 March 2026 00:51:39 +0000 (0:00:00.818) 0:00:02.360 ******** 2026-03-05 00:52:02.800786 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-05 00:52:02.800796 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-05 00:52:02.800806 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-05 00:52:02.800816 | orchestrator | 2026-03-05 00:52:02.800826 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-05 00:52:02.800835 | orchestrator | Thursday 05 March 2026 00:51:41 +0000 (0:00:01.914) 0:00:04.275 ******** 2026-03-05 00:52:02.800845 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:52:02.800855 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:52:02.800865 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:52:02.800877 | orchestrator | 2026-03-05 00:52:02.800889 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-05 00:52:02.800900 | orchestrator | Thursday 05 March 2026 00:51:42 +0000 (0:00:01.691) 0:00:05.966 ******** 2026-03-05 00:52:02.800929 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:52:02.800941 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:52:02.800954 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:52:02.800971 | orchestrator | 2026-03-05 00:52:02.800988 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:52:02.801004 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:52:02.801022 | orchestrator | 
testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:52:02.801039 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:52:02.801054 | orchestrator | 2026-03-05 00:52:02.801071 | orchestrator | 2026-03-05 00:52:02.801086 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:52:02.801103 | orchestrator | Thursday 05 March 2026 00:51:50 +0000 (0:00:08.273) 0:00:14.240 ******** 2026-03-05 00:52:02.801119 | orchestrator | =============================================================================== 2026-03-05 00:52:02.801135 | orchestrator | memcached : Restart memcached container --------------------------------- 8.27s 2026-03-05 00:52:02.801150 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.91s 2026-03-05 00:52:02.801168 | orchestrator | memcached : Check memcached container ----------------------------------- 1.69s 2026-03-05 00:52:02.801184 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.82s 2026-03-05 00:52:02.801201 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.51s 2026-03-05 00:52:02.801219 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2026-03-05 00:52:02.801234 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-03-05 00:52:02.801246 | orchestrator | 2026-03-05 00:52:02.801258 | orchestrator | 2026-03-05 00:52:02.801269 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 00:52:02.801279 | orchestrator | 2026-03-05 00:52:02.801289 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 00:52:02.801299 | orchestrator | Thursday 05 March 2026 00:51:37 
+0000 (0:00:00.362) 0:00:00.362 ******** 2026-03-05 00:52:02.801309 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:52:02.801319 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:52:02.801329 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:52:02.801339 | orchestrator | 2026-03-05 00:52:02.801355 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 00:52:02.801384 | orchestrator | Thursday 05 March 2026 00:51:37 +0000 (0:00:00.424) 0:00:00.787 ******** 2026-03-05 00:52:02.801394 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-05 00:52:02.801404 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-05 00:52:02.801414 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-05 00:52:02.801424 | orchestrator | 2026-03-05 00:52:02.801433 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-05 00:52:02.801443 | orchestrator | 2026-03-05 00:52:02.801453 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-05 00:52:02.801463 | orchestrator | Thursday 05 March 2026 00:51:38 +0000 (0:00:00.507) 0:00:01.295 ******** 2026-03-05 00:52:02.801473 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:52:02.801483 | orchestrator | 2026-03-05 00:52:02.801493 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-05 00:52:02.801502 | orchestrator | Thursday 05 March 2026 00:51:38 +0000 (0:00:00.538) 0:00:01.833 ******** 2026-03-05 00:52:02.801515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801609 | orchestrator | 2026-03-05 00:52:02.801619 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-05 00:52:02.801651 | orchestrator | Thursday 05 March 2026 00:51:39 +0000 (0:00:01.129) 
0:00:02.963 ******** 2026-03-05 00:52:02.801668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 
'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801744 | orchestrator | 2026-03-05 00:52:02.801760 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-05 00:52:02.801770 | orchestrator | Thursday 05 March 2026 00:51:42 +0000 (0:00:02.548) 0:00:05.511 ******** 2026-03-05 00:52:02.801780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801897 | orchestrator | 2026-03-05 00:52:02.801908 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-05 00:52:02.801926 | orchestrator | Thursday 05 March 2026 00:51:45 +0000 (0:00:02.994) 0:00:08.506 ******** 2026-03-05 00:52:02.801943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 
'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.801997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.802096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 
00:52:02.802127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-05 00:52:02.802147 | orchestrator | 2026-03-05 00:52:02.802157 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-05 00:52:02.802167 | orchestrator | Thursday 05 March 2026 00:51:47 +0000 (0:00:02.082) 0:00:10.588 ******** 2026-03-05 00:52:02.802194 | orchestrator | 2026-03-05 00:52:02.802218 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-05 00:52:02.802235 | orchestrator | Thursday 05 March 2026 00:51:47 +0000 (0:00:00.087) 0:00:10.675 ******** 2026-03-05 00:52:02.802251 | orchestrator | 2026-03-05 00:52:02.802267 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-05 00:52:02.802283 | orchestrator | Thursday 05 March 2026 00:51:47 +0000 (0:00:00.100) 0:00:10.776 ******** 2026-03-05 00:52:02.802299 | orchestrator | 2026-03-05 00:52:02.802316 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-05 00:52:02.802334 | orchestrator | Thursday 05 March 2026 00:51:47 +0000 (0:00:00.090) 0:00:10.867 ******** 2026-03-05 00:52:02.802351 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:52:02.802369 | orchestrator | changed: 
[testbed-node-1] 2026-03-05 00:52:02.802385 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:52:02.802399 | orchestrator | 2026-03-05 00:52:02.802410 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-05 00:52:02.802419 | orchestrator | Thursday 05 March 2026 00:51:51 +0000 (0:00:04.271) 0:00:15.139 ******** 2026-03-05 00:52:02.802429 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:52:02.802439 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:52:02.802449 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:52:02.802459 | orchestrator | 2026-03-05 00:52:02.802469 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:52:02.802479 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:52:02.802490 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:52:02.802500 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:52:02.802510 | orchestrator | 2026-03-05 00:52:02.802520 | orchestrator | 2026-03-05 00:52:02.802529 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:52:02.802539 | orchestrator | Thursday 05 March 2026 00:52:01 +0000 (0:00:09.135) 0:00:24.274 ******** 2026-03-05 00:52:02.802549 | orchestrator | =============================================================================== 2026-03-05 00:52:02.802559 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.14s 2026-03-05 00:52:02.802568 | orchestrator | redis : Restart redis container ----------------------------------------- 4.27s 2026-03-05 00:52:02.802578 | orchestrator | redis : Copying over redis config files --------------------------------- 2.99s 2026-03-05 00:52:02.802588 | 
orchestrator | redis : Copying over default config.json files -------------------------- 2.55s 2026-03-05 00:52:02.802597 | orchestrator | redis : Check redis containers ------------------------------------------ 2.08s 2026-03-05 00:52:02.802607 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.13s 2026-03-05 00:52:02.802617 | orchestrator | redis : include_tasks --------------------------------------------------- 0.54s 2026-03-05 00:52:02.802690 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2026-03-05 00:52:02.802701 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2026-03-05 00:52:02.802711 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.28s 2026-03-05 00:52:02.802721 | orchestrator | 2026-03-05 00:52:02 | INFO  | Task 6ba19b88-0fd2-4130-8d8d-d0eafcabbce8 is in state SUCCESS 2026-03-05 00:52:02.802740 | orchestrator | 2026-03-05 00:52:02 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:02.802749 | orchestrator | 2026-03-05 00:52:02 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:52:02.802767 | orchestrator | 2026-03-05 00:52:02 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:52:02.802784 | orchestrator | 2026-03-05 00:52:02 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:52:02.802801 | orchestrator | 2026-03-05 00:52:02 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:52:02.802818 | orchestrator | 2026-03-05 00:52:02 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:05.807169 | orchestrator | 2026-03-05 00:52:05 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:05.808852 | orchestrator | 2026-03-05 00:52:05 | INFO  | Task 
3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:52:05.811149 | orchestrator | 2026-03-05 00:52:05 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:52:05.814335 | orchestrator | 2026-03-05 00:52:05 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:52:05.817366 | orchestrator | 2026-03-05 00:52:05 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:52:05.817449 | orchestrator | 2026-03-05 00:52:05 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:08.865764 | orchestrator | 2026-03-05 00:52:08 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:08.865868 | orchestrator | 2026-03-05 00:52:08 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:52:08.866320 | orchestrator | 2026-03-05 00:52:08 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:52:08.867030 | orchestrator | 2026-03-05 00:52:08 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:52:08.867880 | orchestrator | 2026-03-05 00:52:08 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:52:08.867925 | orchestrator | 2026-03-05 00:52:08 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:12.095390 | orchestrator | 2026-03-05 00:52:11 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:12.095496 | orchestrator | 2026-03-05 00:52:11 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:52:12.095512 | orchestrator | 2026-03-05 00:52:11 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:52:12.095524 | orchestrator | 2026-03-05 00:52:11 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:52:12.095536 | orchestrator | 2026-03-05 00:52:11 | INFO  | Task 
07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:52:12.095548 | orchestrator | 2026-03-05 00:52:11 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:14.957713 | orchestrator | 2026-03-05 00:52:14 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:14.958138 | orchestrator | 2026-03-05 00:52:14 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:52:14.961093 | orchestrator | 2026-03-05 00:52:14 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:52:14.961249 | orchestrator | 2026-03-05 00:52:14 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:52:14.962701 | orchestrator | 2026-03-05 00:52:14 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:52:14.962770 | orchestrator | 2026-03-05 00:52:14 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:18.003308 | orchestrator | 2026-03-05 00:52:18 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:18.063583 | orchestrator | 2026-03-05 00:52:18 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:52:18.063742 | orchestrator | 2026-03-05 00:52:18 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:52:18.063756 | orchestrator | 2026-03-05 00:52:18 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:52:18.063765 | orchestrator | 2026-03-05 00:52:18 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:52:18.063774 | orchestrator | 2026-03-05 00:52:18 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:21.055765 | orchestrator | 2026-03-05 00:52:21 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:21.055876 | orchestrator | 2026-03-05 00:52:21 | INFO  | Task 
3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:52:21.057189 | orchestrator | 2026-03-05 00:52:21 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:52:21.057230 | orchestrator | 2026-03-05 00:52:21 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:52:21.057360 | orchestrator | 2026-03-05 00:52:21 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:52:21.057515 | orchestrator | 2026-03-05 00:52:21 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:24.128965 | orchestrator | 2026-03-05 00:52:24 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:24.129844 | orchestrator | 2026-03-05 00:52:24 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:52:24.129892 | orchestrator | 2026-03-05 00:52:24 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:52:24.130428 | orchestrator | 2026-03-05 00:52:24 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:52:24.131561 | orchestrator | 2026-03-05 00:52:24 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:52:24.131603 | orchestrator | 2026-03-05 00:52:24 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:27.172350 | orchestrator | 2026-03-05 00:52:27 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:27.172467 | orchestrator | 2026-03-05 00:52:27 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:52:27.172665 | orchestrator | 2026-03-05 00:52:27 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:52:27.173589 | orchestrator | 2026-03-05 00:52:27 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:52:27.174298 | orchestrator | 2026-03-05 00:52:27 | INFO  | Task 
07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:52:27.174327 | orchestrator | 2026-03-05 00:52:27 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:30.256793 | orchestrator | 2026-03-05 00:52:30 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:30.258516 | orchestrator | 2026-03-05 00:52:30 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:52:30.263682 | orchestrator | 2026-03-05 00:52:30 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:52:30.266635 | orchestrator | 2026-03-05 00:52:30 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:52:30.266680 | orchestrator | 2026-03-05 00:52:30 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:52:30.266724 | orchestrator | 2026-03-05 00:52:30 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:33.291957 | orchestrator | 2026-03-05 00:52:33 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:33.292107 | orchestrator | 2026-03-05 00:52:33 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:52:33.292475 | orchestrator | 2026-03-05 00:52:33 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:52:33.292987 | orchestrator | 2026-03-05 00:52:33 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:52:33.293472 | orchestrator | 2026-03-05 00:52:33 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:52:33.293660 | orchestrator | 2026-03-05 00:52:33 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:36.318958 | orchestrator | 2026-03-05 00:52:36 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:36.319058 | orchestrator | 2026-03-05 00:52:36 | INFO  | Task 
3dfe97c3-06a8-4643-a177-b94b099f7aac is in state STARTED 2026-03-05 00:52:36.319866 | orchestrator | 2026-03-05 00:52:36 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:52:36.320257 | orchestrator | 2026-03-05 00:52:36 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:52:36.322542 | orchestrator | 2026-03-05 00:52:36 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:52:36.322594 | orchestrator | 2026-03-05 00:52:36 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:39.349462 | orchestrator | 2026-03-05 00:52:39 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:39.349538 | orchestrator | 2026-03-05 00:52:39 | INFO  | Task 3dfe97c3-06a8-4643-a177-b94b099f7aac is in state SUCCESS 2026-03-05 00:52:39.350403 | orchestrator | 2026-03-05 00:52:39.350443 | orchestrator | 2026-03-05 00:52:39.350452 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 00:52:39.350460 | orchestrator | 2026-03-05 00:52:39.350549 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 00:52:39.350555 | orchestrator | Thursday 05 March 2026 00:51:36 +0000 (0:00:00.255) 0:00:00.255 ******** 2026-03-05 00:52:39.350559 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:52:39.350565 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:52:39.350568 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:52:39.350572 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:52:39.350576 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:52:39.350580 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:52:39.350584 | orchestrator | 2026-03-05 00:52:39.350588 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 00:52:39.350592 | orchestrator | Thursday 05 March 2026 00:51:37 +0000 
(0:00:00.647) 0:00:00.902 ******** 2026-03-05 00:52:39.350596 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-05 00:52:39.350614 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-05 00:52:39.350618 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-05 00:52:39.350621 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-05 00:52:39.350642 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-05 00:52:39.350646 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-05 00:52:39.350650 | orchestrator | 2026-03-05 00:52:39.350654 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-05 00:52:39.350657 | orchestrator | 2026-03-05 00:52:39.350661 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-05 00:52:39.350665 | orchestrator | Thursday 05 March 2026 00:51:38 +0000 (0:00:00.625) 0:00:01.528 ******** 2026-03-05 00:52:39.350670 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:52:39.350763 | orchestrator | 2026-03-05 00:52:39.350769 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-05 00:52:39.350773 | orchestrator | Thursday 05 March 2026 00:51:39 +0000 (0:00:01.158) 0:00:02.686 ******** 2026-03-05 00:52:39.350777 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-05 00:52:39.350781 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-05 00:52:39.350785 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-05 
00:52:39.350789 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-05 00:52:39.350793 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-05 00:52:39.350797 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-05 00:52:39.350801 | orchestrator | 2026-03-05 00:52:39.350804 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-05 00:52:39.350808 | orchestrator | Thursday 05 March 2026 00:51:40 +0000 (0:00:01.310) 0:00:03.996 ******** 2026-03-05 00:52:39.350812 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-05 00:52:39.350816 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-05 00:52:39.350820 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-05 00:52:39.350824 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-05 00:52:39.350828 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-05 00:52:39.350832 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-05 00:52:39.350835 | orchestrator | 2026-03-05 00:52:39.350839 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-05 00:52:39.350843 | orchestrator | Thursday 05 March 2026 00:51:42 +0000 (0:00:01.578) 0:00:05.575 ******** 2026-03-05 00:52:39.350847 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-05 00:52:39.350851 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:52:39.350856 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-05 00:52:39.350859 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:52:39.350863 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-05 00:52:39.350867 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:52:39.350871 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-05 
00:52:39.350874 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:52:39.350878 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-05 00:52:39.350882 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:52:39.350889 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-05 00:52:39.350895 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:52:39.350901 | orchestrator | 2026-03-05 00:52:39.350907 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-05 00:52:39.350913 | orchestrator | Thursday 05 March 2026 00:51:43 +0000 (0:00:01.323) 0:00:06.898 ******** 2026-03-05 00:52:39.350919 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:52:39.350925 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:52:39.350931 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:52:39.350944 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:52:39.350950 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:52:39.350957 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:52:39.350962 | orchestrator | 2026-03-05 00:52:39.350968 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-05 00:52:39.350975 | orchestrator | Thursday 05 March 2026 00:51:44 +0000 (0:00:00.820) 0:00:07.719 ******** 2026-03-05 00:52:39.350999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351036 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351055 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351064 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351073 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351092 | orchestrator | 2026-03-05 00:52:39.351096 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-05 00:52:39.351100 | orchestrator | Thursday 05 March 2026 00:51:46 +0000 (0:00:01.884) 0:00:09.604 ******** 2026-03-05 00:52:39.351107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351119 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351127 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351134 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351149 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351163 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351174 | orchestrator | 2026-03-05 00:52:39.351177 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-05 
00:52:39.351181 | orchestrator | Thursday 05 March 2026 00:51:49 +0000 (0:00:03.328) 0:00:12.932 ******** 2026-03-05 00:52:39.351194 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:52:39.351198 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:52:39.351203 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:52:39.351207 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:52:39.351211 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:52:39.351217 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:52:39.351223 | orchestrator | 2026-03-05 00:52:39.351229 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-05 00:52:39.351235 | orchestrator | Thursday 05 March 2026 00:51:50 +0000 (0:00:01.252) 0:00:14.184 ******** 2026-03-05 00:52:39.351242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351374 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351386 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351442 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-05 00:52:39.351446 | orchestrator | 2026-03-05 00:52:39.351450 | orchestrator 
| TASK [openvswitch : Flush Handlers] ********************************************
2026-03-05 00:52:39.351454 | orchestrator | Thursday 05 March 2026 00:51:54 +0000 (0:00:04.008) 0:00:18.193 ********
2026-03-05 00:52:39.351464 | orchestrator |
2026-03-05 00:52:39.351467 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-05 00:52:39.351471 | orchestrator | Thursday 05 March 2026 00:51:55 +0000 (0:00:00.488) 0:00:18.681 ********
2026-03-05 00:52:39.351475 | orchestrator |
2026-03-05 00:52:39.351479 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-05 00:52:39.351483 | orchestrator | Thursday 05 March 2026 00:51:55 +0000 (0:00:00.312) 0:00:18.994 ********
2026-03-05 00:52:39.351486 | orchestrator |
2026-03-05 00:52:39.351490 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-05 00:52:39.351494 | orchestrator | Thursday 05 March 2026 00:51:55 +0000 (0:00:00.126) 0:00:19.120 ********
2026-03-05 00:52:39.351498 | orchestrator |
2026-03-05 00:52:39.351501 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-05 00:52:39.351505 | orchestrator | Thursday 05 March 2026 00:51:55 +0000 (0:00:00.104) 0:00:19.225 ********
2026-03-05 00:52:39.351509 | orchestrator |
2026-03-05 00:52:39.351512 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-05 00:52:39.351516 | orchestrator | Thursday 05 March 2026 00:51:55 +0000 (0:00:00.118) 0:00:19.343 ********
2026-03-05 00:52:39.351520 | orchestrator |
2026-03-05 00:52:39.351524 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-05 00:52:39.351527 | orchestrator | Thursday 05 March 2026 00:51:55 +0000 (0:00:00.117) 0:00:19.461 ********
2026-03-05 00:52:39.351531 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:52:39.351535 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:52:39.351539 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:52:39.351543 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:52:39.351546 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:52:39.351550 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:52:39.351554 | orchestrator |
2026-03-05 00:52:39.351558 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-05 00:52:39.351562 | orchestrator | Thursday 05 March 2026 00:52:01 +0000 (0:00:05.436) 0:00:24.897 ********
2026-03-05 00:52:39.351566 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:52:39.351570 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:52:39.351574 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:52:39.351578 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:52:39.351582 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:52:39.351585 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:52:39.351589 | orchestrator |
2026-03-05 00:52:39.351593 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-05 00:52:39.351597 | orchestrator | Thursday 05 March 2026 00:52:02 +0000 (0:00:01.394) 0:00:26.291 ********
2026-03-05 00:52:39.351601 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:52:39.351604 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:52:39.351608 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:52:39.351612 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:52:39.351616 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:52:39.351619 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:52:39.351623 | orchestrator |
2026-03-05 00:52:39.351627 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-05 00:52:39.351631 | orchestrator | Thursday 05 March 2026 00:52:11 +0000 (0:00:08.980) 0:00:35.272 ********
2026-03-05 00:52:39.351637 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-05 00:52:39.351642 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-05 00:52:39.351646 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-05 00:52:39.351650 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-05 00:52:39.351653 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-05 00:52:39.351661 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-05 00:52:39.351668 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-05 00:52:39.351672 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-05 00:52:39.351676 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-05 00:52:39.351680 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-05 00:52:39.351683 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-05 00:52:39.351687 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-05 00:52:39.351691 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-05 00:52:39.351695 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-05 00:52:39.351698 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-05 00:52:39.351702 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-05 00:52:39.351706 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-05 00:52:39.351709 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-05 00:52:39.351749 | orchestrator |
2026-03-05 00:52:39.351754 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-05 00:52:39.351758 | orchestrator | Thursday 05 March 2026 00:52:20 +0000 (0:00:08.892) 0:00:44.164 ********
2026-03-05 00:52:39.351762 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-05 00:52:39.351767 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:52:39.351770 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-05 00:52:39.351774 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:52:39.351778 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-05 00:52:39.351782 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:52:39.351786 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-05 00:52:39.351790 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-05 00:52:39.351794 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-05 00:52:39.351797 | orchestrator |
2026-03-05 00:52:39.351801 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-05 00:52:39.351805 | orchestrator | Thursday 05 March 2026 00:52:23 +0000 (0:00:03.208) 0:00:47.373 ********
2026-03-05 00:52:39.351809 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-05 00:52:39.351813 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-05 00:52:39.351816 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:52:39.351820 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:52:39.351824 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-05 00:52:39.351828 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:52:39.351832 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-05 00:52:39.351835 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-05 00:52:39.351839 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-05 00:52:39.351843 | orchestrator |
2026-03-05 00:52:39.351847 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-05 00:52:39.351855 | orchestrator | Thursday 05 March 2026 00:52:28 +0000 (0:00:04.807) 0:00:52.181 ********
2026-03-05 00:52:39.351859 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:52:39.351862 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:52:39.351866 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:52:39.351870 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:52:39.351874 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:52:39.351877 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:52:39.351881 | orchestrator |
2026-03-05 00:52:39.351885 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:52:39.351889 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-05 00:52:39.351897 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-05 00:52:39.351901 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-05 00:52:39.351905 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-05 00:52:39.351909 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-05 00:52:39.351913 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-05 00:52:39.351916 | orchestrator |
2026-03-05 00:52:39.351920 | orchestrator |
2026-03-05 00:52:39.351927 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:52:39.351931 | orchestrator | Thursday 05 March 2026 00:52:38 +0000 (0:00:10.100) 0:01:02.282 ********
2026-03-05 00:52:39.351935 | orchestrator | ===============================================================================
2026-03-05 00:52:39.351938 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.08s
2026-03-05 00:52:39.351942 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.89s
2026-03-05 00:52:39.351946 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 5.44s
2026-03-05 00:52:39.351950 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.81s
2026-03-05 00:52:39.351953 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 4.01s
2026-03-05 00:52:39.351957 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.33s
2026-03-05 00:52:39.351961 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.21s
2026-03-05 00:52:39.351965 | orchestrator | openvswitch : Ensuring config directories exist
------------------------- 1.88s 2026-03-05 00:52:39.351968 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.58s 2026-03-05 00:52:39.351972 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.39s 2026-03-05 00:52:39.351976 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.32s 2026-03-05 00:52:39.351979 | orchestrator | module-load : Load modules ---------------------------------------------- 1.31s 2026-03-05 00:52:39.351983 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.27s 2026-03-05 00:52:39.351988 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.25s 2026-03-05 00:52:39.351992 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.16s 2026-03-05 00:52:39.351997 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.82s 2026-03-05 00:52:39.352001 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s 2026-03-05 00:52:39.352013 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2026-03-05 00:52:39.352017 | orchestrator | 2026-03-05 00:52:39 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED 2026-03-05 00:52:39.352022 | orchestrator | 2026-03-05 00:52:39 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:52:39.352219 | orchestrator | 2026-03-05 00:52:39 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:52:39.352232 | orchestrator | 2026-03-05 00:52:39 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:52:42.385473 | orchestrator | 2026-03-05 00:52:42 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:52:42.385988 | orchestrator | 2026-03-05 00:52:42 | INFO  
| Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:52:42.386646 | orchestrator | 2026-03-05 00:52:42 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:52:42.387262 | orchestrator | 2026-03-05 00:52:42 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED
2026-03-05 00:52:42.388224 | orchestrator | 2026-03-05 00:52:42 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED
2026-03-05 00:52:42.388272 | orchestrator | 2026-03-05 00:52:42 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:53:49.462236 | orchestrator | 2026-03-05 00:53:49 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:53:49.462614 | orchestrator | 2026-03-05 00:53:49 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state STARTED
2026-03-05 00:53:49.463909 | orchestrator | 2026-03-05 00:53:49 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:53:49.464997 | orchestrator | 2026-03-05 00:53:49 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED
2026-03-05 00:53:49.466432 | orchestrator | 2026-03-05 00:53:49 | INFO  | Task 
07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED
2026-03-05 00:53:49.466494 | orchestrator | 2026-03-05 00:53:49 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:53:52.494388 | orchestrator | 2026-03-05 00:53:52 | INFO  | Task de27b448-b914-4251-b55a-905ba5c1329a is in state STARTED
2026-03-05 00:53:52.497572 | orchestrator | 2026-03-05 00:53:52 | INFO  | Task 6fe43a46-e4e3-477a-8c17-1c30e1f643dd is in state STARTED
2026-03-05 00:53:52.498423 | orchestrator | 2026-03-05 00:53:52 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:53:52.503860 | orchestrator |
2026-03-05 00:53:52.504000 | orchestrator | 2026-03-05 00:53:52 | INFO  | Task 36611c07-d14a-47a5-b0d9-cc4a9d55cd93 is in state SUCCESS
2026-03-05 00:53:52.506401 | orchestrator |
2026-03-05 00:53:52.506475 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-05 00:53:52.506493 | orchestrator |
2026-03-05 00:53:52.506505 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-03-05 00:53:52.506518 | orchestrator | Thursday 05 March 2026 00:49:00 +0000 (0:00:00.166) 0:00:00.166 ********
2026-03-05 00:53:52.506531 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:53:52.506545 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:53:52.506556 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:53:52.506568 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:53:52.506579 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:53:52.506592 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:53:52.506717 | orchestrator |
2026-03-05 00:53:52.506731 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-03-05 00:53:52.506739 | orchestrator | Thursday 05 March 2026 00:49:01 +0000 (0:00:00.621) 0:00:00.788 ********
2026-03-05 00:53:52.506746 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:53:52.506755 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:53:52.506762 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:53:52.506770 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:53:52.506777 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:53:52.506784 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:53:52.506791 | orchestrator |
2026-03-05 00:53:52.506799 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-05 00:53:52.506806 | orchestrator | Thursday 05 March 2026 00:49:02 +0000 (0:00:00.673) 0:00:01.461 ********
2026-03-05 00:53:52.506814 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:53:52.506821 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:53:52.506828 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:53:52.506835 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:53:52.506842 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:53:52.506850 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:53:52.506857 | orchestrator |
2026-03-05 00:53:52.506890 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-05 00:53:52.506899 | orchestrator | Thursday 05 March 2026 00:49:02 +0000 (0:00:00.740) 0:00:02.202 ********
2026-03-05 00:53:52.506907 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:53:52.506914 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:53:52.506922 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:53:52.506929 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:53:52.506936 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:53:52.506943 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:53:52.506951 | orchestrator |
2026-03-05 00:53:52.506958 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-05 00:53:52.506966 | orchestrator | Thursday 05 March 2026 00:49:05 +0000 (0:00:02.812) 0:00:05.015 ********
2026-03-05 00:53:52.506975 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:53:52.506983 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:53:52.506992 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:53:52.507000 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:53:52.507029 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:53:52.507037 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:53:52.507046 | orchestrator |
2026-03-05 00:53:52.507055 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-05 00:53:52.507063 | orchestrator | Thursday 05 March 2026 00:49:06 +0000 (0:00:01.112) 0:00:06.127 ********
2026-03-05 00:53:52.507072 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:53:52.507081 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:53:52.507090 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:53:52.507100 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:53:52.507109 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:53:52.507118 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:53:52.507126 | orchestrator |
2026-03-05 00:53:52.507135 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-05 00:53:52.507144 | orchestrator | Thursday 05 March 2026 00:49:07 +0000 (0:00:00.957) 0:00:07.085 ********
2026-03-05 00:53:52.507152 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:53:52.507160 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:53:52.507167 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:53:52.507175 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:53:52.507182 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:53:52.507190 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:53:52.507197 | orchestrator |
2026-03-05 00:53:52.507204 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-05 00:53:52.507212 | orchestrator | Thursday 05 March 2026 00:49:08 +0000 (0:00:00.830) 0:00:07.915 ********
2026-03-05 00:53:52.507219 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:53:52.507226 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:53:52.507233 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:53:52.507241 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:53:52.507248 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:53:52.507255 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:53:52.507262 | orchestrator |
2026-03-05 00:53:52.507270 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-05 00:53:52.507277 | orchestrator | Thursday 05 March 2026 00:49:09 +0000 (0:00:00.581) 0:00:08.497 ********
2026-03-05 00:53:52.507284 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-05 00:53:52.507292 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-05 00:53:52.507299 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:53:52.507314 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-05 00:53:52.507322 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-05 00:53:52.507329 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:53:52.507337 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-05 00:53:52.507344 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-05 00:53:52.507351 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:53:52.507359 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-05 00:53:52.507379 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-05 00:53:52.507434 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:53:52.507442 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-05 00:53:52.507449 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-05 00:53:52.507456 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:53:52.507464 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-05 00:53:52.507471 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-05 00:53:52.507479 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:53:52.507493 | orchestrator |
2026-03-05 00:53:52.507500 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-05 00:53:52.507507 | orchestrator | Thursday 05 March 2026 00:49:09 +0000 (0:00:00.736) 0:00:09.233 ********
2026-03-05 00:53:52.507515 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:53:52.507522 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:53:52.507529 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:53:52.507537 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:53:52.507544 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:53:52.507551 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:53:52.507558 | orchestrator |
2026-03-05 00:53:52.507566 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-05 00:53:52.507574 | orchestrator | Thursday 05 March 2026 00:49:10 +0000 (0:00:01.179) 0:00:10.413 ********
2026-03-05 00:53:52.507582 | orchestrator | ok: [testbed-node-3]
2026-03-05 00:53:52.507589 | orchestrator | ok: [testbed-node-4]
2026-03-05 00:53:52.507596 | orchestrator | ok: [testbed-node-5]
2026-03-05 00:53:52.507604 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:53:52.507611 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:53:52.507618 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:53:52.507626 | orchestrator |
2026-03-05 00:53:52.507633 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-05 00:53:52.507640 | orchestrator | Thursday 05 March 2026 00:49:12 +0000 (0:00:01.100) 0:00:11.514 ********
2026-03-05 00:53:52.507647 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:53:52.507655 | orchestrator | changed: [testbed-node-5]
2026-03-05 00:53:52.507662 | orchestrator | changed: [testbed-node-3]
2026-03-05 00:53:52.507669 | orchestrator | changed: [testbed-node-4]
2026-03-05 00:53:52.507676 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:53:52.507684 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:53:52.507691 | orchestrator |
2026-03-05 00:53:52.507698 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-05 00:53:52.507705 | orchestrator | Thursday 05 March 2026 00:49:18 +0000 (0:00:06.083) 0:00:17.598 ********
2026-03-05 00:53:52.507713 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:53:52.507720 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:53:52.507733 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:53:52.507745 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:53:52.507758 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:53:52.507771 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:53:52.507784 | orchestrator |
2026-03-05 00:53:52.507796 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-05 00:53:52.507808 | orchestrator | Thursday 05 March 2026 00:49:20 +0000 (0:00:01.883) 0:00:19.481 ********
2026-03-05 00:53:52.507820 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:53:52.507832 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:53:52.507845 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:53:52.507858 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:53:52.507907 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:53:52.507920 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:53:52.507930 | orchestrator |
2026-03-05 00:53:52.507938 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-05 00:53:52.507947 | orchestrator | Thursday 05 March 2026 00:49:22 +0000 (0:00:02.763) 0:00:22.245 ********
2026-03-05 00:53:52.507958 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:53:52.507970 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:53:52.507983 | orchestrator | skipping: [testbed-node-5]
2026-03-05 00:53:52.507995 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:53:52.508008 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:53:52.508020 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:53:52.508033 | orchestrator |
2026-03-05 00:53:52.508046 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-05 00:53:52.508067 | orchestrator | Thursday 05 March 2026 00:49:24 +0000 (0:00:01.459) 0:00:23.704 ********
2026-03-05 00:53:52.508079 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-05 00:53:52.508087 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-05 00:53:52.508095 | orchestrator | skipping: [testbed-node-3]
2026-03-05 00:53:52.508102 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-05 00:53:52.508109 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-05 00:53:52.508117 | orchestrator | skipping: [testbed-node-4]
2026-03-05 00:53:52.508124 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-05 00:53:52.508131 | orchestrator | skipping: 
[testbed-node-5] => (item=rancher/k3s)  2026-03-05 00:53:52.508139 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:53:52.508146 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-05 00:53:52.508159 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-05 00:53:52.508167 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-05 00:53:52.508174 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-05 00:53:52.508181 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.508188 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.508196 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-05 00:53:52.508203 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-05 00:53:52.508210 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.508218 | orchestrator | 2026-03-05 00:53:52.508225 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-05 00:53:52.508242 | orchestrator | Thursday 05 March 2026 00:49:25 +0000 (0:00:01.494) 0:00:25.198 ******** 2026-03-05 00:53:52.508250 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:53:52.508257 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:53:52.508264 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:53:52.508272 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.508279 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.508286 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.508294 | orchestrator | 2026-03-05 00:53:52.508301 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-05 00:53:52.508309 | orchestrator | Thursday 05 March 2026 00:49:26 +0000 (0:00:00.769) 0:00:25.968 ******** 2026-03-05 00:53:52.508316 | orchestrator | skipping: [testbed-node-3] 2026-03-05 
00:53:52.508324 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:53:52.508331 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:53:52.508338 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.508345 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.508353 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.508360 | orchestrator | 2026-03-05 00:53:52.508367 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-05 00:53:52.508375 | orchestrator | 2026-03-05 00:53:52.508382 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-05 00:53:52.508390 | orchestrator | Thursday 05 March 2026 00:49:29 +0000 (0:00:03.314) 0:00:29.282 ******** 2026-03-05 00:53:52.508397 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.508404 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.508412 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.508419 | orchestrator | 2026-03-05 00:53:52.508426 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-05 00:53:52.508434 | orchestrator | Thursday 05 March 2026 00:49:32 +0000 (0:00:02.756) 0:00:32.039 ******** 2026-03-05 00:53:52.508441 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.508448 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.508456 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.508463 | orchestrator | 2026-03-05 00:53:52.508470 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-05 00:53:52.508477 | orchestrator | Thursday 05 March 2026 00:49:34 +0000 (0:00:01.403) 0:00:33.443 ******** 2026-03-05 00:53:52.508490 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.508497 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.508504 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.508511 | 
orchestrator | 2026-03-05 00:53:52.508519 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-05 00:53:52.508526 | orchestrator | Thursday 05 March 2026 00:49:35 +0000 (0:00:00.992) 0:00:34.436 ******** 2026-03-05 00:53:52.508533 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.508540 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.508547 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.508555 | orchestrator | 2026-03-05 00:53:52.508562 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-05 00:53:52.508569 | orchestrator | Thursday 05 March 2026 00:49:35 +0000 (0:00:00.929) 0:00:35.366 ******** 2026-03-05 00:53:52.508577 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.508584 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.508591 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.508599 | orchestrator | 2026-03-05 00:53:52.508606 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-05 00:53:52.508614 | orchestrator | Thursday 05 March 2026 00:49:36 +0000 (0:00:00.687) 0:00:36.053 ******** 2026-03-05 00:53:52.508621 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.508628 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:53:52.508636 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:53:52.508643 | orchestrator | 2026-03-05 00:53:52.508650 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-05 00:53:52.508658 | orchestrator | Thursday 05 March 2026 00:49:38 +0000 (0:00:01.611) 0:00:37.665 ******** 2026-03-05 00:53:52.508665 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.508672 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:53:52.508680 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:53:52.508687 | orchestrator | 2026-03-05 
00:53:52.508695 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-05 00:53:52.508702 | orchestrator | Thursday 05 March 2026 00:49:40 +0000 (0:00:02.231) 0:00:39.896 ******** 2026-03-05 00:53:52.508709 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:53:52.508717 | orchestrator | 2026-03-05 00:53:52.508724 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-05 00:53:52.508732 | orchestrator | Thursday 05 March 2026 00:49:42 +0000 (0:00:01.790) 0:00:41.687 ******** 2026-03-05 00:53:52.508739 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.508746 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.508754 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.508761 | orchestrator | 2026-03-05 00:53:52.508772 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-05 00:53:52.508784 | orchestrator | Thursday 05 March 2026 00:49:48 +0000 (0:00:06.310) 0:00:47.997 ******** 2026-03-05 00:53:52.508803 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.508816 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.508827 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.508838 | orchestrator | 2026-03-05 00:53:52.508850 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-05 00:53:52.508885 | orchestrator | Thursday 05 March 2026 00:49:49 +0000 (0:00:00.976) 0:00:48.974 ******** 2026-03-05 00:53:52.508899 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.508910 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.508922 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.508935 | orchestrator | 2026-03-05 00:53:52.508948 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] 
************************** 2026-03-05 00:53:52.508960 | orchestrator | Thursday 05 March 2026 00:49:50 +0000 (0:00:01.183) 0:00:50.158 ******** 2026-03-05 00:53:52.508973 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.508983 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.509032 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.509041 | orchestrator | 2026-03-05 00:53:52.509049 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-05 00:53:52.509064 | orchestrator | Thursday 05 March 2026 00:49:52 +0000 (0:00:01.978) 0:00:52.136 ******** 2026-03-05 00:53:52.509072 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.509079 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.509086 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.509093 | orchestrator | 2026-03-05 00:53:52.509101 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-05 00:53:52.509108 | orchestrator | Thursday 05 March 2026 00:49:54 +0000 (0:00:01.791) 0:00:53.928 ******** 2026-03-05 00:53:52.509115 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.509123 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.509130 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.509137 | orchestrator | 2026-03-05 00:53:52.509144 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-05 00:53:52.509152 | orchestrator | Thursday 05 March 2026 00:49:55 +0000 (0:00:00.658) 0:00:54.586 ******** 2026-03-05 00:53:52.509159 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.509166 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:53:52.509174 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:53:52.509181 | orchestrator | 2026-03-05 00:53:52.509188 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label 
compatibility] ********** 2026-03-05 00:53:52.509195 | orchestrator | Thursday 05 March 2026 00:49:57 +0000 (0:00:02.658) 0:00:57.244 ******** 2026-03-05 00:53:52.509203 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.509210 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.509217 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.509225 | orchestrator | 2026-03-05 00:53:52.509232 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-05 00:53:52.509239 | orchestrator | Thursday 05 March 2026 00:50:00 +0000 (0:00:03.032) 0:01:00.277 ******** 2026-03-05 00:53:52.509247 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.509254 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.509261 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.509268 | orchestrator | 2026-03-05 00:53:52.509276 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-05 00:53:52.509283 | orchestrator | Thursday 05 March 2026 00:50:02 +0000 (0:00:01.215) 0:01:01.492 ******** 2026-03-05 00:53:52.509291 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-05 00:53:52.509299 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-05 00:53:52.509307 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-05 00:53:52.509314 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-03-05 00:53:52.509321 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-05 00:53:52.509329 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-05 00:53:52.509336 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-05 00:53:52.509343 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-05 00:53:52.509351 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-05 00:53:52.509364 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-05 00:53:52.509371 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-05 00:53:52.509379 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-03-05 00:53:52.509386 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.509393 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.509401 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.509408 | orchestrator | 2026-03-05 00:53:52.509415 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-05 00:53:52.509423 | orchestrator | Thursday 05 March 2026 00:50:45 +0000 (0:00:43.730) 0:01:45.223 ******** 2026-03-05 00:53:52.509430 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.509437 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.509444 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.509452 | orchestrator | 2026-03-05 00:53:52.509464 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-05 00:53:52.509471 | orchestrator | Thursday 05 March 2026 00:50:46 +0000 (0:00:00.315) 0:01:45.538 ******** 2026-03-05 00:53:52.509479 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.509486 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:53:52.509493 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:53:52.509500 | orchestrator | 2026-03-05 00:53:52.509508 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-05 00:53:52.509515 | orchestrator | Thursday 05 March 2026 00:50:47 +0000 (0:00:01.139) 0:01:46.678 ******** 2026-03-05 00:53:52.509522 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.509530 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:53:52.509537 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:53:52.509544 | orchestrator | 2026-03-05 00:53:52.509556 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-05 00:53:52.509563 | orchestrator | Thursday 05 March 2026 00:50:48 +0000 (0:00:01.617) 0:01:48.295 ******** 2026-03-05 00:53:52.509571 
| orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.509578 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:53:52.509585 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:53:52.509593 | orchestrator | 2026-03-05 00:53:52.509600 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-05 00:53:52.509607 | orchestrator | Thursday 05 March 2026 00:51:13 +0000 (0:00:24.965) 0:02:13.260 ******** 2026-03-05 00:53:52.509615 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.509622 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.509629 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.509636 | orchestrator | 2026-03-05 00:53:52.509644 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-05 00:53:52.509651 | orchestrator | Thursday 05 March 2026 00:51:14 +0000 (0:00:00.671) 0:02:13.932 ******** 2026-03-05 00:53:52.509658 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.509666 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.509673 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.509680 | orchestrator | 2026-03-05 00:53:52.509687 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-05 00:53:52.509695 | orchestrator | Thursday 05 March 2026 00:51:15 +0000 (0:00:00.651) 0:02:14.584 ******** 2026-03-05 00:53:52.509702 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.509709 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:53:52.509717 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:53:52.509724 | orchestrator | 2026-03-05 00:53:52.509731 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-05 00:53:52.509739 | orchestrator | Thursday 05 March 2026 00:51:15 +0000 (0:00:00.768) 0:02:15.353 ******** 2026-03-05 00:53:52.509751 | orchestrator | ok: [testbed-node-0] 
2026-03-05 00:53:52.509758 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.509766 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.509773 | orchestrator | 2026-03-05 00:53:52.509780 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-05 00:53:52.509787 | orchestrator | Thursday 05 March 2026 00:51:16 +0000 (0:00:00.983) 0:02:16.337 ******** 2026-03-05 00:53:52.509795 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.509802 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.509809 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.509816 | orchestrator | 2026-03-05 00:53:52.509824 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-05 00:53:52.509831 | orchestrator | Thursday 05 March 2026 00:51:17 +0000 (0:00:00.384) 0:02:16.722 ******** 2026-03-05 00:53:52.509839 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.509846 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:53:52.509854 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:53:52.509861 | orchestrator | 2026-03-05 00:53:52.509890 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-05 00:53:52.509898 | orchestrator | Thursday 05 March 2026 00:51:17 +0000 (0:00:00.674) 0:02:17.396 ******** 2026-03-05 00:53:52.509905 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.509912 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:53:52.509920 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:53:52.509927 | orchestrator | 2026-03-05 00:53:52.509934 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-05 00:53:52.509942 | orchestrator | Thursday 05 March 2026 00:51:18 +0000 (0:00:00.695) 0:02:18.091 ******** 2026-03-05 00:53:52.509949 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.509956 | 
orchestrator | changed: [testbed-node-1] 2026-03-05 00:53:52.509963 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:53:52.509971 | orchestrator | 2026-03-05 00:53:52.509978 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-05 00:53:52.509985 | orchestrator | Thursday 05 March 2026 00:51:19 +0000 (0:00:01.167) 0:02:19.258 ******** 2026-03-05 00:53:52.509993 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:53:52.510000 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:53:52.510007 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:53:52.510056 | orchestrator | 2026-03-05 00:53:52.510065 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-05 00:53:52.510072 | orchestrator | Thursday 05 March 2026 00:51:20 +0000 (0:00:01.094) 0:02:20.353 ******** 2026-03-05 00:53:52.510080 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.510087 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.510094 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.510102 | orchestrator | 2026-03-05 00:53:52.510109 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-05 00:53:52.510116 | orchestrator | Thursday 05 March 2026 00:51:21 +0000 (0:00:00.328) 0:02:20.681 ******** 2026-03-05 00:53:52.510124 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.510131 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.510138 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.510145 | orchestrator | 2026-03-05 00:53:52.510153 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-05 00:53:52.510160 | orchestrator | Thursday 05 March 2026 00:51:21 +0000 (0:00:00.298) 0:02:20.980 ******** 2026-03-05 00:53:52.510168 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.510175 | orchestrator | 
ok: [testbed-node-0] 2026-03-05 00:53:52.510182 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.510189 | orchestrator | 2026-03-05 00:53:52.510197 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-05 00:53:52.510209 | orchestrator | Thursday 05 March 2026 00:51:22 +0000 (0:00:00.896) 0:02:21.876 ******** 2026-03-05 00:53:52.510216 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.510224 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.510236 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.510243 | orchestrator | 2026-03-05 00:53:52.510251 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-05 00:53:52.510258 | orchestrator | Thursday 05 March 2026 00:51:23 +0000 (0:00:00.707) 0:02:22.584 ******** 2026-03-05 00:53:52.510266 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-05 00:53:52.510279 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-05 00:53:52.510287 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-05 00:53:52.510294 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-05 00:53:52.510301 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-05 00:53:52.510309 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-05 00:53:52.510316 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-05 00:53:52.510323 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-05 
00:53:52.510331 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-05 00:53:52.510338 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-05 00:53:52.510345 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-05 00:53:52.510352 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-05 00:53:52.510360 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-05 00:53:52.510367 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-05 00:53:52.510374 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-05 00:53:52.510382 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-05 00:53:52.510389 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-05 00:53:52.510396 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-05 00:53:52.510404 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-05 00:53:52.510411 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-05 00:53:52.510419 | orchestrator | 2026-03-05 00:53:52.510426 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-05 00:53:52.510433 | orchestrator | 2026-03-05 00:53:52.510441 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-05 00:53:52.510448 | orchestrator | Thursday 05 March 2026 00:51:26 +0000 (0:00:03.531) 
0:02:26.116 ******** 2026-03-05 00:53:52.510455 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:53:52.510463 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:53:52.510470 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:53:52.510477 | orchestrator | 2026-03-05 00:53:52.510485 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-05 00:53:52.510492 | orchestrator | Thursday 05 March 2026 00:51:27 +0000 (0:00:00.518) 0:02:26.634 ******** 2026-03-05 00:53:52.510499 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:53:52.510507 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:53:52.510514 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:53:52.510521 | orchestrator | 2026-03-05 00:53:52.510528 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-05 00:53:52.510542 | orchestrator | Thursday 05 March 2026 00:51:27 +0000 (0:00:00.665) 0:02:27.300 ******** 2026-03-05 00:53:52.510549 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:53:52.510556 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:53:52.510564 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:53:52.510571 | orchestrator | 2026-03-05 00:53:52.510578 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-05 00:53:52.510586 | orchestrator | Thursday 05 March 2026 00:51:28 +0000 (0:00:00.345) 0:02:27.646 ******** 2026-03-05 00:53:52.510593 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 00:53:52.510600 | orchestrator | 2026-03-05 00:53:52.510608 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-05 00:53:52.510615 | orchestrator | Thursday 05 March 2026 00:51:28 +0000 (0:00:00.695) 0:02:28.341 ******** 2026-03-05 00:53:52.510622 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:53:52.510630 
| orchestrator | skipping: [testbed-node-4] 2026-03-05 00:53:52.510637 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:53:52.510644 | orchestrator | 2026-03-05 00:53:52.510651 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-05 00:53:52.510659 | orchestrator | Thursday 05 March 2026 00:51:29 +0000 (0:00:00.316) 0:02:28.657 ******** 2026-03-05 00:53:52.510666 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:53:52.510674 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:53:52.510681 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:53:52.510688 | orchestrator | 2026-03-05 00:53:52.510703 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-05 00:53:52.510711 | orchestrator | Thursday 05 March 2026 00:51:29 +0000 (0:00:00.305) 0:02:28.963 ******** 2026-03-05 00:53:52.510719 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:53:52.510726 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:53:52.510733 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:53:52.510741 | orchestrator | 2026-03-05 00:53:52.510748 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-05 00:53:52.510755 | orchestrator | Thursday 05 March 2026 00:51:29 +0000 (0:00:00.325) 0:02:29.289 ******** 2026-03-05 00:53:52.510763 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:53:52.510770 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:53:52.510777 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:53:52.510785 | orchestrator | 2026-03-05 00:53:52.510797 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-05 00:53:52.510804 | orchestrator | Thursday 05 March 2026 00:51:30 +0000 (0:00:00.937) 0:02:30.227 ******** 2026-03-05 00:53:52.510812 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:53:52.510819 | 
orchestrator | changed: [testbed-node-4] 2026-03-05 00:53:52.510826 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:53:52.510833 | orchestrator | 2026-03-05 00:53:52.510841 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-05 00:53:52.510848 | orchestrator | Thursday 05 March 2026 00:51:31 +0000 (0:00:01.188) 0:02:31.416 ******** 2026-03-05 00:53:52.510855 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:53:52.510862 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:53:52.510884 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:53:52.510891 | orchestrator | 2026-03-05 00:53:52.510899 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-05 00:53:52.510906 | orchestrator | Thursday 05 March 2026 00:51:33 +0000 (0:00:01.324) 0:02:32.741 ******** 2026-03-05 00:53:52.510913 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:53:52.510920 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:53:52.510928 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:53:52.510935 | orchestrator | 2026-03-05 00:53:52.510942 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-05 00:53:52.510949 | orchestrator | 2026-03-05 00:53:52.510957 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-05 00:53:52.510969 | orchestrator | Thursday 05 March 2026 00:51:46 +0000 (0:00:12.853) 0:02:45.594 ******** 2026-03-05 00:53:52.510977 | orchestrator | ok: [testbed-manager] 2026-03-05 00:53:52.510984 | orchestrator | 2026-03-05 00:53:52.510991 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-05 00:53:52.510999 | orchestrator | Thursday 05 March 2026 00:51:47 +0000 (0:00:00.957) 0:02:46.552 ******** 2026-03-05 00:53:52.511006 | orchestrator | changed: [testbed-manager] 2026-03-05 
00:53:52.511013 | orchestrator | 2026-03-05 00:53:52.511021 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-05 00:53:52.511028 | orchestrator | Thursday 05 March 2026 00:51:47 +0000 (0:00:00.629) 0:02:47.182 ******** 2026-03-05 00:53:52.511035 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-05 00:53:52.511043 | orchestrator | 2026-03-05 00:53:52.511050 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-05 00:53:52.511057 | orchestrator | Thursday 05 March 2026 00:51:48 +0000 (0:00:00.681) 0:02:47.863 ******** 2026-03-05 00:53:52.511065 | orchestrator | changed: [testbed-manager] 2026-03-05 00:53:52.511072 | orchestrator | 2026-03-05 00:53:52.511079 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-05 00:53:52.511086 | orchestrator | Thursday 05 March 2026 00:51:49 +0000 (0:00:00.932) 0:02:48.796 ******** 2026-03-05 00:53:52.511094 | orchestrator | changed: [testbed-manager] 2026-03-05 00:53:52.511101 | orchestrator | 2026-03-05 00:53:52.511109 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-05 00:53:52.511116 | orchestrator | Thursday 05 March 2026 00:51:49 +0000 (0:00:00.598) 0:02:49.394 ******** 2026-03-05 00:53:52.511123 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-05 00:53:52.511131 | orchestrator | 2026-03-05 00:53:52.511138 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-05 00:53:52.511145 | orchestrator | Thursday 05 March 2026 00:51:51 +0000 (0:00:01.952) 0:02:51.347 ******** 2026-03-05 00:53:52.511152 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-05 00:53:52.511160 | orchestrator | 2026-03-05 00:53:52.511167 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-03-05 00:53:52.511179 | orchestrator | Thursday 05 March 2026 00:51:52 +0000 (0:00:00.858) 0:02:52.205 ******** 2026-03-05 00:53:52.511192 | orchestrator | changed: [testbed-manager] 2026-03-05 00:53:52.511204 | orchestrator | 2026-03-05 00:53:52.511216 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-05 00:53:52.511227 | orchestrator | Thursday 05 March 2026 00:51:53 +0000 (0:00:00.874) 0:02:53.080 ******** 2026-03-05 00:53:52.511238 | orchestrator | changed: [testbed-manager] 2026-03-05 00:53:52.511250 | orchestrator | 2026-03-05 00:53:52.511260 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-05 00:53:52.511272 | orchestrator | 2026-03-05 00:53:52.511284 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-05 00:53:52.511296 | orchestrator | Thursday 05 March 2026 00:51:54 +0000 (0:00:00.514) 0:02:53.594 ******** 2026-03-05 00:53:52.511308 | orchestrator | ok: [testbed-manager] 2026-03-05 00:53:52.511320 | orchestrator | 2026-03-05 00:53:52.511333 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-05 00:53:52.511345 | orchestrator | Thursday 05 March 2026 00:51:54 +0000 (0:00:00.148) 0:02:53.743 ******** 2026-03-05 00:53:52.511357 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-05 00:53:52.511370 | orchestrator | 2026-03-05 00:53:52.511380 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-05 00:53:52.511387 | orchestrator | Thursday 05 March 2026 00:51:54 +0000 (0:00:00.271) 0:02:54.014 ******** 2026-03-05 00:53:52.511395 | orchestrator | ok: [testbed-manager] 2026-03-05 00:53:52.511402 | orchestrator | 2026-03-05 00:53:52.511414 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-03-05 00:53:52.511467 | orchestrator | Thursday 05 March 2026 00:51:55 +0000 (0:00:00.896) 0:02:54.911 ******** 2026-03-05 00:53:52.511475 | orchestrator | ok: [testbed-manager] 2026-03-05 00:53:52.511483 | orchestrator | 2026-03-05 00:53:52.511490 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-05 00:53:52.511497 | orchestrator | Thursday 05 March 2026 00:51:57 +0000 (0:00:01.722) 0:02:56.633 ******** 2026-03-05 00:53:52.511504 | orchestrator | changed: [testbed-manager] 2026-03-05 00:53:52.511511 | orchestrator | 2026-03-05 00:53:52.511519 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-05 00:53:52.511526 | orchestrator | Thursday 05 March 2026 00:51:59 +0000 (0:00:02.001) 0:02:58.635 ******** 2026-03-05 00:53:52.511533 | orchestrator | ok: [testbed-manager] 2026-03-05 00:53:52.511540 | orchestrator | 2026-03-05 00:53:52.511554 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-05 00:53:52.511562 | orchestrator | Thursday 05 March 2026 00:51:59 +0000 (0:00:00.475) 0:02:59.111 ******** 2026-03-05 00:53:52.511569 | orchestrator | changed: [testbed-manager] 2026-03-05 00:53:52.511577 | orchestrator | 2026-03-05 00:53:52.511584 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-05 00:53:52.511591 | orchestrator | Thursday 05 March 2026 00:52:07 +0000 (0:00:07.945) 0:03:07.056 ******** 2026-03-05 00:53:52.511598 | orchestrator | changed: [testbed-manager] 2026-03-05 00:53:52.511605 | orchestrator | 2026-03-05 00:53:52.511613 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-05 00:53:52.511620 | orchestrator | Thursday 05 March 2026 00:52:24 +0000 (0:00:17.154) 0:03:24.211 ******** 2026-03-05 00:53:52.511627 | orchestrator | ok: [testbed-manager] 2026-03-05 
00:53:52.511634 | orchestrator | 2026-03-05 00:53:52.511641 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-05 00:53:52.511649 | orchestrator | 2026-03-05 00:53:52.511656 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-05 00:53:52.511663 | orchestrator | Thursday 05 March 2026 00:52:25 +0000 (0:00:00.623) 0:03:24.835 ******** 2026-03-05 00:53:52.511670 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.511677 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.511685 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.511692 | orchestrator | 2026-03-05 00:53:52.511699 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-05 00:53:52.511706 | orchestrator | Thursday 05 March 2026 00:52:25 +0000 (0:00:00.334) 0:03:25.170 ******** 2026-03-05 00:53:52.511713 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.511720 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.511727 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.511735 | orchestrator | 2026-03-05 00:53:52.511742 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-05 00:53:52.511749 | orchestrator | Thursday 05 March 2026 00:52:26 +0000 (0:00:00.454) 0:03:25.625 ******** 2026-03-05 00:53:52.511756 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:53:52.511764 | orchestrator | 2026-03-05 00:53:52.511771 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-05 00:53:52.511778 | orchestrator | Thursday 05 March 2026 00:52:27 +0000 (0:00:00.896) 0:03:26.521 ******** 2026-03-05 00:53:52.511785 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-05 00:53:52.511793 | 
orchestrator | 2026-03-05 00:53:52.511800 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-05 00:53:52.511807 | orchestrator | Thursday 05 March 2026 00:52:28 +0000 (0:00:01.159) 0:03:27.681 ******** 2026-03-05 00:53:52.511815 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 00:53:52.511822 | orchestrator | 2026-03-05 00:53:52.511830 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-05 00:53:52.511837 | orchestrator | Thursday 05 March 2026 00:52:29 +0000 (0:00:01.023) 0:03:28.704 ******** 2026-03-05 00:53:52.511844 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.511860 | orchestrator | 2026-03-05 00:53:52.512035 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-05 00:53:52.512061 | orchestrator | Thursday 05 March 2026 00:52:29 +0000 (0:00:00.198) 0:03:28.902 ******** 2026-03-05 00:53:52.512070 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 00:53:52.512079 | orchestrator | 2026-03-05 00:53:52.512088 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-05 00:53:52.512097 | orchestrator | Thursday 05 March 2026 00:52:30 +0000 (0:00:01.134) 0:03:30.038 ******** 2026-03-05 00:53:52.512106 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.512114 | orchestrator | 2026-03-05 00:53:52.512123 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-05 00:53:52.512132 | orchestrator | Thursday 05 March 2026 00:52:30 +0000 (0:00:00.134) 0:03:30.173 ******** 2026-03-05 00:53:52.512140 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.512149 | orchestrator | 2026-03-05 00:53:52.512158 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-05 00:53:52.512166 | orchestrator | Thursday 05 
March 2026 00:52:30 +0000 (0:00:00.111) 0:03:30.284 ******** 2026-03-05 00:53:52.512175 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.512183 | orchestrator | 2026-03-05 00:53:52.512192 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-05 00:53:52.512201 | orchestrator | Thursday 05 March 2026 00:52:31 +0000 (0:00:00.144) 0:03:30.429 ******** 2026-03-05 00:53:52.512210 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.512218 | orchestrator | 2026-03-05 00:53:52.512227 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-05 00:53:52.512235 | orchestrator | Thursday 05 March 2026 00:52:31 +0000 (0:00:00.148) 0:03:30.578 ******** 2026-03-05 00:53:52.512245 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-05 00:53:52.512254 | orchestrator | 2026-03-05 00:53:52.512262 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-05 00:53:52.512271 | orchestrator | Thursday 05 March 2026 00:52:36 +0000 (0:00:05.478) 0:03:36.056 ******** 2026-03-05 00:53:52.512287 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-05 00:53:52.512296 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
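The `FAILED - RETRYING … (30 retries left)` lines above come from an Ansible `until`/`retries` loop around the readiness check for each Cilium workload. A minimal shell sketch of that pattern (the helper name `retry_until` and the example `kubectl rollout status` invocation are illustrative assumptions, not the play's actual code):

```shell
# Sketch of an Ansible-style retry loop: re-run a command until it
# succeeds or the retry budget (30 in the log above) is exhausted.
# Usage: retry_until <max_tries> <delay_seconds> <command...>
retry_until() {
  max="$1"; delay="$2"; shift 2
  i=1
  while [ "$i" -le "$max" ]; do
    if "$@"; then
      return 0                                   # command succeeded
    fi
    echo "FAILED - RETRYING ($((max - i)) retries left)." >&2
    i=$((i + 1))
    sleep "$delay"                               # back off before retrying
  done
  return 1                                       # budget exhausted
}

# In a play like the one above this would wrap something such as
# (hypothetical invocation, resource names taken from the log):
#   retry_until 30 10 kubectl -n kube-system \
#       rollout status deployment/cilium-operator --timeout=10s
```

The log shows `deployment/cilium-operator` needing one retry before the remaining items (`daemonset/cilium`, `deployment/hubble-relay`, `deployment/hubble-ui`) passed on the first attempt.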
2026-03-05 00:53:52.512305 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-05 00:53:52.512314 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-05 00:53:52.512323 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-05 00:53:52.512356 | orchestrator | 2026-03-05 00:53:52.512365 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-05 00:53:52.512374 | orchestrator | Thursday 05 March 2026 00:53:24 +0000 (0:00:48.153) 0:04:24.210 ******** 2026-03-05 00:53:52.512393 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 00:53:52.512402 | orchestrator | 2026-03-05 00:53:52.512412 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-05 00:53:52.512421 | orchestrator | Thursday 05 March 2026 00:53:26 +0000 (0:00:01.937) 0:04:26.147 ******** 2026-03-05 00:53:52.512430 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-05 00:53:52.512438 | orchestrator | 2026-03-05 00:53:52.512446 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-05 00:53:52.512454 | orchestrator | Thursday 05 March 2026 00:53:28 +0000 (0:00:01.759) 0:04:27.907 ******** 2026-03-05 00:53:52.512462 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-05 00:53:52.512470 | orchestrator | 2026-03-05 00:53:52.512478 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-05 00:53:52.512486 | orchestrator | Thursday 05 March 2026 00:53:29 +0000 (0:00:01.102) 0:04:29.009 ******** 2026-03-05 00:53:52.512494 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.512510 | orchestrator | 2026-03-05 00:53:52.512518 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-05 00:53:52.512526 | orchestrator 
| Thursday 05 March 2026 00:53:29 +0000 (0:00:00.144) 0:04:29.154 ******** 2026-03-05 00:53:52.512534 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-05 00:53:52.512542 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-05 00:53:52.512550 | orchestrator | 2026-03-05 00:53:52.512558 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-05 00:53:52.512566 | orchestrator | Thursday 05 March 2026 00:53:31 +0000 (0:00:01.822) 0:04:30.976 ******** 2026-03-05 00:53:52.512574 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.512582 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.512590 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.512597 | orchestrator | 2026-03-05 00:53:52.512605 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-05 00:53:52.512613 | orchestrator | Thursday 05 March 2026 00:53:31 +0000 (0:00:00.284) 0:04:31.261 ******** 2026-03-05 00:53:52.512621 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.512629 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.512637 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.512645 | orchestrator | 2026-03-05 00:53:52.512652 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-05 00:53:52.512660 | orchestrator | 2026-03-05 00:53:52.512668 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-05 00:53:52.512676 | orchestrator | Thursday 05 March 2026 00:53:32 +0000 (0:00:01.123) 0:04:32.384 ******** 2026-03-05 00:53:52.512684 | orchestrator | ok: [testbed-manager] 2026-03-05 00:53:52.512692 | orchestrator | 2026-03-05 00:53:52.512700 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-03-05 00:53:52.512707 | orchestrator | Thursday 05 March 2026 00:53:33 +0000 (0:00:00.153) 0:04:32.537 ******** 2026-03-05 00:53:52.512715 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-05 00:53:52.512723 | orchestrator | 2026-03-05 00:53:52.512731 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-05 00:53:52.512739 | orchestrator | Thursday 05 March 2026 00:53:33 +0000 (0:00:00.209) 0:04:32.747 ******** 2026-03-05 00:53:52.512747 | orchestrator | changed: [testbed-manager] 2026-03-05 00:53:52.512755 | orchestrator | 2026-03-05 00:53:52.512763 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-05 00:53:52.512771 | orchestrator | 2026-03-05 00:53:52.512779 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-05 00:53:52.512787 | orchestrator | Thursday 05 March 2026 00:53:38 +0000 (0:00:04.903) 0:04:37.651 ******** 2026-03-05 00:53:52.512795 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:53:52.512803 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:53:52.512811 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:53:52.512818 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:53:52.512827 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:53:52.512835 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:53:52.512842 | orchestrator | 2026-03-05 00:53:52.512850 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-05 00:53:52.512858 | orchestrator | Thursday 05 March 2026 00:53:38 +0000 (0:00:00.643) 0:04:38.295 ******** 2026-03-05 00:53:52.512884 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-05 00:53:52.512892 | orchestrator | ok: [testbed-node-3 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-03-05 00:53:52.512900 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-05 00:53:52.512908 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-05 00:53:52.512916 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-05 00:53:52.512929 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-05 00:53:52.512942 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-05 00:53:52.512950 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-05 00:53:52.512958 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-05 00:53:52.512966 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-05 00:53:52.512974 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-05 00:53:52.512982 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-05 00:53:52.512996 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-05 00:53:52.513004 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-05 00:53:52.513012 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-05 00:53:52.513021 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-05 00:53:52.513028 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-05 00:53:52.513036 | orchestrator | ok: [testbed-node-2 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-03-05 00:53:52.513044 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-05 00:53:52.513061 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-05 00:53:52.513069 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-05 00:53:52.513085 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-05 00:53:52.513093 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-05 00:53:52.513102 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-05 00:53:52.513110 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-05 00:53:52.513118 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-05 00:53:52.513126 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-05 00:53:52.513134 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-05 00:53:52.513142 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-05 00:53:52.513150 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-05 00:53:52.513158 | orchestrator | 2026-03-05 00:53:52.513166 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-05 00:53:52.513174 | orchestrator | Thursday 05 March 2026 00:53:49 +0000 (0:00:10.664) 0:04:48.959 ******** 2026-03-05 00:53:52.513182 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:53:52.513190 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:53:52.513198 | orchestrator | 
skipping: [testbed-node-5] 2026-03-05 00:53:52.513206 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.513214 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.513222 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.513230 | orchestrator | 2026-03-05 00:53:52.513238 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-05 00:53:52.513246 | orchestrator | Thursday 05 March 2026 00:53:50 +0000 (0:00:00.680) 0:04:49.640 ******** 2026-03-05 00:53:52.513254 | orchestrator | skipping: [testbed-node-3] 2026-03-05 00:53:52.513262 | orchestrator | skipping: [testbed-node-4] 2026-03-05 00:53:52.513276 | orchestrator | skipping: [testbed-node-5] 2026-03-05 00:53:52.513284 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:53:52.513292 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:53:52.513300 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:53:52.513308 | orchestrator | 2026-03-05 00:53:52.513316 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:53:52.513324 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 00:53:52.513334 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-05 00:53:52.513342 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-05 00:53:52.513351 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-05 00:53:52.513359 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-05 00:53:52.513367 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-05 00:53:52.513375 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-05 00:53:52.513383 | orchestrator | 2026-03-05 00:53:52.513391 | orchestrator | 2026-03-05 00:53:52.513403 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:53:52.513411 | orchestrator | Thursday 05 March 2026 00:53:50 +0000 (0:00:00.425) 0:04:50.066 ******** 2026-03-05 00:53:52.513419 | orchestrator | =============================================================================== 2026-03-05 00:53:52.513427 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 48.15s 2026-03-05 00:53:52.513435 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.73s 2026-03-05 00:53:52.513443 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.97s 2026-03-05 00:53:52.513456 | orchestrator | kubectl : Install required packages ------------------------------------ 17.15s 2026-03-05 00:53:52.513464 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.85s 2026-03-05 00:53:52.513473 | orchestrator | Manage labels ---------------------------------------------------------- 10.66s 2026-03-05 00:53:52.513481 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.94s 2026-03-05 00:53:52.513489 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 6.31s 2026-03-05 00:53:52.513497 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.08s 2026-03-05 00:53:52.513505 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.48s 2026-03-05 00:53:52.513513 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.90s 2026-03-05 00:53:52.513522 | orchestrator | k3s_server : Remove manifests and folders that are 
only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.53s 2026-03-05 00:53:52.513530 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 3.31s 2026-03-05 00:53:52.513538 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.03s 2026-03-05 00:53:52.513546 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.81s 2026-03-05 00:53:52.513554 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.76s 2026-03-05 00:53:52.513562 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.76s 2026-03-05 00:53:52.513570 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.66s 2026-03-05 00:53:52.513583 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.23s 2026-03-05 00:53:52.513591 | orchestrator | kubectl : Add repository gpg key ---------------------------------------- 2.00s 2026-03-05 00:53:52.513599 | orchestrator | 2026-03-05 00:53:52 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:53:52.513607 | orchestrator | 2026-03-05 00:53:52 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 00:53:52.513615 | orchestrator | 2026-03-05 00:53:52 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED 2026-03-05 00:53:52.513623 | orchestrator | 2026-03-05 00:53:52 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:53:55.537593 | orchestrator | 2026-03-05 00:53:55 | INFO  | Task de27b448-b914-4251-b55a-905ba5c1329a is in state STARTED 2026-03-05 00:53:55.537687 | orchestrator | 2026-03-05 00:53:55 | INFO  | Task 6fe43a46-e4e3-477a-8c17-1c30e1f643dd is in state STARTED 2026-03-05 00:53:55.538976 | orchestrator | 2026-03-05 00:53:55 | INFO  | Task 
5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:53:55.539180 | orchestrator | 2026-03-05 00:53:55 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:53:55.539942 | orchestrator | 2026-03-05 00:53:55 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED
2026-03-05 00:53:55.540482 | orchestrator | 2026-03-05 00:53:55 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED
2026-03-05 00:53:55.540513 | orchestrator | 2026-03-05 00:53:55 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:53:58.588445 | orchestrator | 2026-03-05 00:53:58 | INFO  | Task de27b448-b914-4251-b55a-905ba5c1329a is in state STARTED
2026-03-05 00:53:58.588543 | orchestrator | 2026-03-05 00:53:58 | INFO  | Task 6fe43a46-e4e3-477a-8c17-1c30e1f643dd is in state STARTED
2026-03-05 00:53:58.588556 | orchestrator | 2026-03-05 00:53:58 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:53:58.588568 | orchestrator | 2026-03-05 00:53:58 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:53:58.588578 | orchestrator | 2026-03-05 00:53:58 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED
2026-03-05 00:53:58.607405 | orchestrator | 2026-03-05 00:53:58 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED
2026-03-05 00:53:58.607494 | orchestrator | 2026-03-05 00:53:58 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:54:01.602002 | orchestrator | 2026-03-05 00:54:01 | INFO  | Task de27b448-b914-4251-b55a-905ba5c1329a is in state STARTED
2026-03-05 00:54:01.602198 | orchestrator | 2026-03-05 00:54:01 | INFO  | Task 6fe43a46-e4e3-477a-8c17-1c30e1f643dd is in state SUCCESS
2026-03-05 00:54:01.603057 | orchestrator | 2026-03-05 00:54:01 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:54:01.603348 | orchestrator | 2026-03-05 00:54:01 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:54:01.604443 | orchestrator | 2026-03-05 00:54:01 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED
2026-03-05 00:54:01.606688 | orchestrator | 2026-03-05 00:54:01 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED
2026-03-05 00:54:01.606790 | orchestrator | 2026-03-05 00:54:01 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:54:04.641524 | orchestrator | 2026-03-05 00:54:04 | INFO  | Task de27b448-b914-4251-b55a-905ba5c1329a is in state SUCCESS
2026-03-05 00:54:04.642645 | orchestrator | 2026-03-05 00:54:04 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:54:04.644063 | orchestrator | 2026-03-05 00:54:04 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:54:04.645662 | orchestrator | 2026-03-05 00:54:04 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED
2026-03-05 00:54:04.647400 | orchestrator | 2026-03-05 00:54:04 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED
2026-03-05 00:54:04.647957 | orchestrator | 2026-03-05 00:54:04 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:54:07.682534 | orchestrator | 2026-03-05 00:54:07 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:54:07.684035 | orchestrator | 2026-03-05 00:54:07 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:54:07.686361 | orchestrator | 2026-03-05 00:54:07 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED
2026-03-05 00:54:07.687438 | orchestrator | 2026-03-05 00:54:07 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED
2026-03-05 00:54:07.687664 | orchestrator | 2026-03-05 00:54:07 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:54:10.728486 | orchestrator | 2026-03-05 00:54:10 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:54:10.730433 | orchestrator | 2026-03-05 00:54:10 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:54:10.732212 | orchestrator | 2026-03-05 00:54:10 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED
2026-03-05 00:54:10.733846 | orchestrator | 2026-03-05 00:54:10 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED
2026-03-05 00:54:10.733885 | orchestrator | 2026-03-05 00:54:10 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:54:13.774341 | orchestrator | 2026-03-05 00:54:13 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:54:13.774418 | orchestrator | 2026-03-05 00:54:13 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:54:13.774424 | orchestrator | 2026-03-05 00:54:13 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED
2026-03-05 00:54:13.774790 | orchestrator | 2026-03-05 00:54:13 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED
2026-03-05 00:54:13.774800 | orchestrator | 2026-03-05 00:54:13 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:54:16.816444 | orchestrator | 2026-03-05 00:54:16 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:54:16.816996 | orchestrator | 2026-03-05 00:54:16 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:54:16.817693 | orchestrator | 2026-03-05 00:54:16 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED
2026-03-05 00:54:16.818666 | orchestrator | 2026-03-05 00:54:16 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED
2026-03-05 00:54:16.818709 | orchestrator | 2026-03-05 00:54:16 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:54:19.866245 | orchestrator | 2026-03-05 00:54:19 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:54:19.866459 | orchestrator | 2026-03-05 00:54:19 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:54:19.868234 | orchestrator | 2026-03-05 00:54:19 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED
2026-03-05 00:54:19.871058 | orchestrator | 2026-03-05 00:54:19 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED
2026-03-05 00:54:19.871300 | orchestrator | 2026-03-05 00:54:19 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:54:22.912362 | orchestrator | 2026-03-05 00:54:22 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:54:22.912896 | orchestrator | 2026-03-05 00:54:22 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:54:22.913649 | orchestrator | 2026-03-05 00:54:22 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED
2026-03-05 00:54:22.914775 | orchestrator | 2026-03-05 00:54:22 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state STARTED
2026-03-05 00:54:22.914862 | orchestrator | 2026-03-05 00:54:22 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:54:25.973916 | orchestrator | 2026-03-05 00:54:25 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:54:25.975882 | orchestrator | 2026-03-05 00:54:25 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:54:25.979953 | orchestrator | 2026-03-05 00:54:25 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED
2026-03-05 00:54:25.983219 | orchestrator | 2026-03-05 00:54:25 | INFO  | Task 07cd819b-1a23-4d3f-a2ce-8377e7e0e10b is in state SUCCESS
2026-03-05 00:54:25.984080 | orchestrator |
2026-03-05 00:54:25.984168 | orchestrator |
2026-03-05 00:54:25.984284 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-03-05 00:54:25.984398 | orchestrator |
2026-03-05 00:54:25.984413 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-05 00:54:25.984457 | orchestrator | Thursday 05 March 2026 00:53:55 +0000 (0:00:00.141) 0:00:00.141 ********
2026-03-05 00:54:25.984471 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-05 00:54:25.984482 | orchestrator |
2026-03-05 00:54:25.984492 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-05 00:54:25.984502 | orchestrator | Thursday 05 March 2026 00:53:56 +0000 (0:00:00.708) 0:00:00.849 ********
2026-03-05 00:54:25.984512 | orchestrator | changed: [testbed-manager]
2026-03-05 00:54:25.984522 | orchestrator |
2026-03-05 00:54:25.984532 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-03-05 00:54:25.984546 | orchestrator | Thursday 05 March 2026 00:53:57 +0000 (0:00:01.802) 0:00:02.651 ********
2026-03-05 00:54:25.984562 | orchestrator | changed: [testbed-manager]
2026-03-05 00:54:25.984584 | orchestrator |
2026-03-05 00:54:25.984646 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:54:25.984663 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:54:25.984681 | orchestrator |
2026-03-05 00:54:25.984697 | orchestrator |
2026-03-05 00:54:25.984714 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:54:25.984730 | orchestrator | Thursday 05 March 2026 00:53:58 +0000 (0:00:00.818) 0:00:03.470 ********
2026-03-05 00:54:25.984747 | orchestrator | ===============================================================================
2026-03-05 00:54:25.984872 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.80s
2026-03-05 00:54:25.984909 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.82s
2026-03-05 00:54:25.984955 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.71s
2026-03-05 00:54:25.984986 | orchestrator |
2026-03-05 00:54:25.985005 | orchestrator |
2026-03-05 00:54:25.985016 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-05 00:54:25.985026 | orchestrator |
2026-03-05 00:54:25.985036 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-05 00:54:25.985046 | orchestrator | Thursday 05 March 2026 00:53:55 +0000 (0:00:00.140) 0:00:00.140 ********
2026-03-05 00:54:25.985078 | orchestrator | ok: [testbed-manager]
2026-03-05 00:54:25.985089 | orchestrator |
2026-03-05 00:54:25.985099 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-05 00:54:25.985109 | orchestrator | Thursday 05 March 2026 00:53:55 +0000 (0:00:00.590) 0:00:00.730 ********
2026-03-05 00:54:25.985118 | orchestrator | ok: [testbed-manager]
2026-03-05 00:54:25.985128 | orchestrator |
2026-03-05 00:54:25.985138 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-05 00:54:25.985148 | orchestrator | Thursday 05 March 2026 00:53:56 +0000 (0:00:00.582) 0:00:01.313 ********
2026-03-05 00:54:25.985158 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-05 00:54:25.985169 | orchestrator |
2026-03-05 00:54:25.985179 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-05 00:54:25.985189 | orchestrator | Thursday 05 March 2026 00:53:57 +0000 (0:00:00.746) 0:00:02.059 ********
2026-03-05 00:54:25.985198 | orchestrator | changed: [testbed-manager]
2026-03-05 00:54:25.985208 | orchestrator |
2026-03-05 00:54:25.985218 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-05 00:54:25.985229 | orchestrator | Thursday 05 March 2026 00:53:58 +0000 (0:00:01.748) 0:00:03.808 ********
2026-03-05 00:54:25.985238 | orchestrator | changed: [testbed-manager]
2026-03-05 00:54:25.985248 | orchestrator |
2026-03-05 00:54:25.985258 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-05 00:54:25.985268 | orchestrator | Thursday 05 March 2026 00:53:59 +0000 (0:00:00.577) 0:00:04.385 ********
2026-03-05 00:54:25.985278 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-05 00:54:25.985288 | orchestrator |
2026-03-05 00:54:25.985303 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-05 00:54:25.985313 | orchestrator | Thursday 05 March 2026 00:54:01 +0000 (0:00:01.800) 0:00:06.186 ********
2026-03-05 00:54:25.985323 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-05 00:54:25.985334 | orchestrator |
2026-03-05 00:54:25.985343 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-05 00:54:25.985353 | orchestrator | Thursday 05 March 2026 00:54:02 +0000 (0:00:00.855) 0:00:07.041 ********
2026-03-05 00:54:25.985363 | orchestrator | ok: [testbed-manager]
2026-03-05 00:54:25.985373 | orchestrator |
2026-03-05 00:54:25.985382 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-05 00:54:25.985392 | orchestrator | Thursday 05 March 2026 00:54:02 +0000 (0:00:00.428) 0:00:07.470 ********
2026-03-05 00:54:25.985402 | orchestrator | ok: [testbed-manager]
2026-03-05 00:54:25.985412 | orchestrator |
2026-03-05 00:54:25.985422 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:54:25.985432 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:54:25.985442 | orchestrator |
2026-03-05 00:54:25.985452 | orchestrator |
2026-03-05 00:54:25.985462 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:54:25.985471 | orchestrator | Thursday 05 March 2026 00:54:02 +0000 (0:00:00.312) 0:00:07.783 ********
2026-03-05 00:54:25.985481 | orchestrator | ===============================================================================
2026-03-05 00:54:25.985491 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.80s
2026-03-05 00:54:25.985500 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.75s
2026-03-05 00:54:25.985511 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.86s
2026-03-05 00:54:25.985542 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.75s
2026-03-05 00:54:25.985553 | orchestrator | Get home directory of operator user ------------------------------------- 0.59s
2026-03-05 00:54:25.985563 | orchestrator | Create .kube directory -------------------------------------------------- 0.58s
2026-03-05 00:54:25.985573 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.58s
2026-03-05 00:54:25.985590 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.43s
2026-03-05 00:54:25.985599 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.31s
2026-03-05 00:54:25.985609 | orchestrator |
2026-03-05 00:54:25.985998 | orchestrator |
2026-03-05 00:54:25.986094 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-03-05 00:54:25.986106 | orchestrator |
2026-03-05 00:54:25.986116 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-05 00:54:25.986126 | orchestrator | Thursday 05 March 2026 00:51:59 +0000 (0:00:00.223) 0:00:00.223 ********
2026-03-05 00:54:25.986136 | orchestrator | ok: [localhost] => {
2026-03-05 00:54:25.986149 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-03-05 00:54:25.986160 | orchestrator | }
2026-03-05 00:54:25.986170 | orchestrator |
2026-03-05 00:54:25.986180 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-03-05 00:54:25.986190 | orchestrator | Thursday 05 March 2026 00:51:59 +0000 (0:00:00.086) 0:00:00.309 ********
2026-03-05 00:54:25.986201 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-03-05 00:54:25.986213 | orchestrator | ...ignoring
2026-03-05 00:54:25.986223 | orchestrator |
2026-03-05 00:54:25.986233 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-03-05 00:54:25.986243 | orchestrator | Thursday 05 March 2026 00:52:03 +0000 (0:00:03.230) 0:00:03.540 ********
2026-03-05 00:54:25.986253 | orchestrator | skipping: [localhost]
2026-03-05 00:54:25.986263 | orchestrator |
2026-03-05 00:54:25.986272 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-03-05 00:54:25.986283 | orchestrator | Thursday 05 March 2026 00:52:03 +0000 (0:00:00.145) 0:00:03.685 ********
2026-03-05 00:54:25.986293 | orchestrator | ok: [localhost]
2026-03-05 00:54:25.986304 | orchestrator |
2026-03-05 00:54:25.986319 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 00:54:25.986335 | orchestrator |
2026-03-05 00:54:25.986351 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 00:54:25.986367 | orchestrator | Thursday 05 March 2026 00:52:03 +0000 (0:00:00.627) 0:00:04.313 ********
2026-03-05 00:54:25.986383 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:54:25.986400 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:54:25.986411 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:54:25.986421 | orchestrator |
2026-03-05 00:54:25.986431 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 00:54:25.986441 | orchestrator | Thursday 05 March 2026 00:52:04 +0000 (0:00:00.725) 0:00:05.038 ********
2026-03-05 00:54:25.986450 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-03-05 00:54:25.986461 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-03-05 00:54:25.986470 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-03-05 00:54:25.986480 | orchestrator |
2026-03-05 00:54:25.986490 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-03-05 00:54:25.986500 | orchestrator |
2026-03-05 00:54:25.986510 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-05 00:54:25.986519 | orchestrator | Thursday 05 March 2026 00:52:05 +0000 (0:00:00.830) 0:00:05.869 ********
2026-03-05 00:54:25.986530 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:54:25.986540 | orchestrator |
2026-03-05 00:54:25.986550 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-05 00:54:25.986568 | orchestrator | Thursday 05 March 2026 00:52:06 +0000 (0:00:00.577) 0:00:06.447 ********
2026-03-05 00:54:25.986580 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:54:25.986592 | orchestrator |
2026-03-05 00:54:25.986603 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-03-05 00:54:25.986627 | orchestrator | Thursday 05 March 2026 00:52:07 +0000 (0:00:01.012) 0:00:07.459 ********
2026-03-05 00:54:25.986640 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:54:25.986651 | orchestrator |
2026-03-05 00:54:25.986662 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-03-05 00:54:25.986673 | orchestrator | Thursday 05 March 2026 00:52:07 +0000 (0:00:00.331) 0:00:07.790 ********
2026-03-05 00:54:25.986685 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:54:25.986696 | orchestrator |
2026-03-05 00:54:25.986708 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-03-05 00:54:25.986719 | orchestrator | Thursday 05 March 2026 00:52:07 +0000 (0:00:00.314) 0:00:08.105 ********
2026-03-05 00:54:25.986730 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:54:25.986741 | orchestrator |
2026-03-05 00:54:25.986753 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-03-05 00:54:25.986764 | orchestrator | Thursday 05 March 2026 00:52:08 +0000 (0:00:00.366) 0:00:08.472 ********
2026-03-05 00:54:25.986776 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:54:25.986788 | orchestrator |
2026-03-05 00:54:25.986799 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-05 00:54:25.986811 | orchestrator | Thursday 05 March 2026 00:52:09 +0000 (0:00:00.902) 0:00:09.375 ********
2026-03-05 00:54:25.986822 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:54:25.986833 | orchestrator |
2026-03-05 00:54:25.986845 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-05 00:54:25.986856 | orchestrator | Thursday 05 March 2026 00:52:09 +0000 (0:00:00.871) 0:00:10.247 ********
2026-03-05 00:54:25.986868 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:54:25.986880 | orchestrator |
2026-03-05 00:54:25.986891 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-03-05 00:54:25.986903 | orchestrator | Thursday 05 March 2026 00:52:10 +0000 (0:00:00.980) 0:00:11.228 ********
2026-03-05 00:54:25.986914 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:54:25.987013 | orchestrator |
2026-03-05 00:54:25.987033 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-03-05 00:54:25.987043 | orchestrator | Thursday 05 March 2026 00:52:11 +0000 (0:00:00.570) 0:00:11.798 ********
2026-03-05 00:54:25.987053 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:54:25.987063 | orchestrator |
2026-03-05 00:54:25.987094 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-03-05 00:54:25.987104 | orchestrator | Thursday 05 March 2026 00:52:12 +0000 (0:00:00.831) 0:00:12.630 ********
2026-03-05 00:54:25.987121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:54:25.987138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:54:25.987165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:54:25.987177 | orchestrator |
2026-03-05 00:54:25.987187 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-03-05 00:54:25.987197 | orchestrator | Thursday 05 March 2026 00:52:14 +0000 (0:00:01.820) 0:00:14.451 ********
2026-03-05 00:54:25.987215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:54:25.987227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:54:25.987249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:54:25.987260 | orchestrator |
2026-03-05 00:54:25.987271 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-03-05 00:54:25.987281 | orchestrator | Thursday 05 March 2026 00:52:16 +0000 (0:00:02.138) 0:00:16.589 ********
2026-03-05 00:54:25.987291 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-05 00:54:25.987301 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-05 00:54:25.987311 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-05 00:54:25.987321 | orchestrator |
2026-03-05 00:54:25.987331 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-03-05 00:54:25.987341 | orchestrator | Thursday 05 March 2026 00:52:18 +0000 (0:00:02.753) 0:00:19.342 ********
2026-03-05 00:54:25.987350 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-05 00:54:25.987360 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-05 00:54:25.987370 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-05 00:54:25.987380 | orchestrator |
2026-03-05 00:54:25.987389 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-03-05 00:54:25.987399 | orchestrator | Thursday 05 March 2026 00:52:21 +0000 (0:00:02.224) 0:00:21.567 ********
2026-03-05 00:54:25.987409 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-05 00:54:25.987419 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-05 00:54:25.987428 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-05 00:54:25.987439 | orchestrator |
2026-03-05 00:54:25.987454 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-03-05 00:54:25.987464 | orchestrator | Thursday 05 March 2026 00:52:23 +0000 (0:00:02.274) 0:00:23.841 ********
2026-03-05 00:54:25.987474 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-05 00:54:25.987484 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-05 00:54:25.987494 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-05 00:54:25.987505 | orchestrator |
2026-03-05 00:54:25.987514 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-03-05 00:54:25.987528 | orchestrator | Thursday 05 March 2026 00:52:25 +0000 (0:00:02.407) 0:00:26.249 ********
2026-03-05 00:54:25.987536 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-05 00:54:25.987545 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-05 00:54:25.987553 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-05 00:54:25.987561 | orchestrator |
2026-03-05 00:54:25.987569 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-03-05 00:54:25.987577 | orchestrator | Thursday 05 March 2026 00:52:28 +0000 (0:00:02.305) 0:00:28.555 ********
2026-03-05 00:54:25.987585 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-05 00:54:25.987593 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-05 00:54:25.987601 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-05 00:54:25.987609 | orchestrator |
2026-03-05 00:54:25.987617 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-05 00:54:25.987626 | orchestrator | Thursday 05 March 2026 00:52:30 +0000 (0:00:02.258) 0:00:30.814 ********
2026-03-05 00:54:25.987634 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:54:25.987642 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:54:25.987650 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:54:25.987658 | orchestrator |
2026-03-05 00:54:25.987666 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-03-05 00:54:25.987674 | orchestrator | Thursday 05 March 2026 00:52:31 +0000 (0:00:01.163) 0:00:31.978 ********
2026-03-05 00:54:25.987687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:54:25.987697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:54:25.987712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-05 00:54:25.987728 | orchestrator |
2026-03-05 00:54:25.987736 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-03-05 00:54:25.987744 | orchestrator | Thursday 05 March 2026 00:52:33 +0000 (0:00:02.281) 0:00:34.259 ********
2026-03-05 00:54:25.987752 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:54:25.987761 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:54:25.987769 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:54:25.987777 | orchestrator |
2026-03-05 00:54:25.987785 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-03-05 00:54:25.987793 | orchestrator | Thursday 05 March 2026 00:52:34 +0000 (0:00:00.980) 0:00:35.240 ********
2026-03-05 00:54:25.987801 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:54:25.987810 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:54:25.987819 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:54:25.987827 | orchestrator |
2026-03-05 00:54:25.987835 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-03-05 00:54:25.987844 | orchestrator | Thursday 05 March 2026 00:52:41 +0000 (0:00:07.053) 0:00:42.294 ********
2026-03-05 00:54:25.987852 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:54:25.987860 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:54:25.987868 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:54:25.987876 | orchestrator |
2026-03-05 00:54:25.987884 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-05 00:54:25.987892 | orchestrator |
2026-03-05 00:54:25.987900 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-05 00:54:25.987908 | orchestrator | Thursday 05 March 2026 00:52:42 +0000 (0:00:00.406) 0:00:42.700 ******** 2026-03-05 00:54:25.987916 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:54:25.987945 | orchestrator | 2026-03-05 00:54:25.987953 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-05 00:54:25.987961 | orchestrator | Thursday 05 March 2026 00:52:43 +0000 (0:00:00.778) 0:00:43.478 ******** 2026-03-05 00:54:25.987970 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:54:25.987977 | orchestrator | 2026-03-05 00:54:25.987985 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-05 00:54:25.987993 | orchestrator | Thursday 05 March 2026 00:52:43 +0000 (0:00:00.198) 0:00:43.677 ******** 2026-03-05 00:54:25.988006 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:54:25.988014 | orchestrator | 2026-03-05 00:54:25.988022 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-05 00:54:25.988031 | orchestrator | Thursday 05 March 2026 00:52:45 +0000 (0:00:02.496) 0:00:46.173 ******** 2026-03-05 00:54:25.988039 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:54:25.988047 | orchestrator | 2026-03-05 00:54:25.988055 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-05 00:54:25.988063 | orchestrator | 2026-03-05 00:54:25.988071 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-05 00:54:25.988091 | orchestrator | Thursday 05 March 2026 00:53:42 +0000 (0:00:56.467) 0:01:42.641 ******** 2026-03-05 00:54:25.988099 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:54:25.988107 | orchestrator | 2026-03-05 00:54:25.988115 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-05 00:54:25.988123 | orchestrator | Thursday 05 
March 2026 00:53:42 +0000 (0:00:00.634) 0:01:43.275 ******** 2026-03-05 00:54:25.988131 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:54:25.988139 | orchestrator | 2026-03-05 00:54:25.988147 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-05 00:54:25.988155 | orchestrator | Thursday 05 March 2026 00:53:43 +0000 (0:00:00.406) 0:01:43.682 ******** 2026-03-05 00:54:25.988163 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:54:25.988171 | orchestrator | 2026-03-05 00:54:25.988179 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-05 00:54:25.988187 | orchestrator | Thursday 05 March 2026 00:53:50 +0000 (0:00:06.699) 0:01:50.381 ******** 2026-03-05 00:54:25.988195 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:54:25.988203 | orchestrator | 2026-03-05 00:54:25.988211 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-05 00:54:25.988219 | orchestrator | 2026-03-05 00:54:25.988227 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-05 00:54:25.988235 | orchestrator | Thursday 05 March 2026 00:54:01 +0000 (0:00:11.850) 0:02:02.231 ******** 2026-03-05 00:54:25.988243 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:54:25.988251 | orchestrator | 2026-03-05 00:54:25.988261 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-05 00:54:25.988275 | orchestrator | Thursday 05 March 2026 00:54:02 +0000 (0:00:00.637) 0:02:02.869 ******** 2026-03-05 00:54:25.988294 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:54:25.988308 | orchestrator | 2026-03-05 00:54:25.988320 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-05 00:54:25.988343 | orchestrator | Thursday 05 March 2026 00:54:02 +0000 (0:00:00.240) 0:02:03.109 
******** 2026-03-05 00:54:25.988357 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:54:25.988369 | orchestrator | 2026-03-05 00:54:25.988381 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-05 00:54:25.988393 | orchestrator | Thursday 05 March 2026 00:54:04 +0000 (0:00:01.853) 0:02:04.963 ******** 2026-03-05 00:54:25.988404 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:54:25.988417 | orchestrator | 2026-03-05 00:54:25.988430 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-05 00:54:25.988443 | orchestrator | 2026-03-05 00:54:25.988456 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-05 00:54:25.988470 | orchestrator | Thursday 05 March 2026 00:54:20 +0000 (0:00:16.322) 0:02:21.285 ******** 2026-03-05 00:54:25.988483 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:54:25.988495 | orchestrator | 2026-03-05 00:54:25.988508 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-05 00:54:25.988522 | orchestrator | Thursday 05 March 2026 00:54:21 +0000 (0:00:00.577) 0:02:21.862 ******** 2026-03-05 00:54:25.988534 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-05 00:54:25.988547 | orchestrator | enable_outward_rabbitmq_True 2026-03-05 00:54:25.988578 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-05 00:54:25.988603 | orchestrator | outward_rabbitmq_restart 2026-03-05 00:54:25.988616 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:54:25.988628 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:54:25.988641 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:54:25.988654 | orchestrator | 2026-03-05 00:54:25.988667 | orchestrator | PLAY [Apply role rabbitmq (outward)] 
******************************************* 2026-03-05 00:54:25.988681 | orchestrator | skipping: no hosts matched 2026-03-05 00:54:25.988694 | orchestrator | 2026-03-05 00:54:25.988708 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-05 00:54:25.988734 | orchestrator | skipping: no hosts matched 2026-03-05 00:54:25.988742 | orchestrator | 2026-03-05 00:54:25.988750 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-05 00:54:25.988758 | orchestrator | skipping: no hosts matched 2026-03-05 00:54:25.988766 | orchestrator | 2026-03-05 00:54:25.988774 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 00:54:25.988783 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-05 00:54:25.988792 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-05 00:54:25.988800 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:54:25.988808 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 00:54:25.988816 | orchestrator | 2026-03-05 00:54:25.988825 | orchestrator | 2026-03-05 00:54:25.988833 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 00:54:25.988841 | orchestrator | Thursday 05 March 2026 00:54:24 +0000 (0:00:02.686) 0:02:24.549 ******** 2026-03-05 00:54:25.988855 | orchestrator | =============================================================================== 2026-03-05 00:54:25.988864 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.64s 2026-03-05 00:54:25.988872 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.05s 
2026-03-05 00:54:25.988880 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.05s 2026-03-05 00:54:25.988888 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.23s 2026-03-05 00:54:25.988896 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.75s 2026-03-05 00:54:25.988904 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.69s 2026-03-05 00:54:25.988912 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.41s 2026-03-05 00:54:25.988944 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.31s 2026-03-05 00:54:25.988958 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.28s 2026-03-05 00:54:25.988966 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.27s 2026-03-05 00:54:25.988974 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.26s 2026-03-05 00:54:25.988982 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.22s 2026-03-05 00:54:25.988990 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.14s 2026-03-05 00:54:25.988999 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.05s 2026-03-05 00:54:25.989007 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.82s 2026-03-05 00:54:25.989015 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.16s 2026-03-05 00:54:25.989023 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.01s 2026-03-05 00:54:25.989031 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.98s 2026-03-05 
00:54:25.989039 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.98s 2026-03-05 00:54:25.989048 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.90s 2026-03-05 00:54:25.989056 | orchestrator | 2026-03-05 00:54:25 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:54:29.029325 | orchestrator | 2026-03-05 00:54:29 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:54:29.030694 | orchestrator | 2026-03-05 00:54:29 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:54:29.032300 | orchestrator | 2026-03-05 00:54:29 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 00:54:29.032380 | orchestrator | 2026-03-05 00:54:29 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:54:32.119496 | orchestrator | 2026-03-05 00:54:32 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:54:32.122290 | orchestrator | 2026-03-05 00:54:32 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:54:32.125354 | orchestrator | 2026-03-05 00:54:32 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 00:54:32.125888 | orchestrator | 2026-03-05 00:54:32 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:54:35.161692 | orchestrator | 2026-03-05 00:54:35 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:54:35.163696 | orchestrator | 2026-03-05 00:54:35 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:54:35.166590 | orchestrator | 2026-03-05 00:54:35 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 00:54:35.166781 | orchestrator | 2026-03-05 00:54:35 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:54:38.208149 | orchestrator | 2026-03-05 00:54:38 | INFO  | Task 
5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:54:38.208779 | orchestrator | 2026-03-05 00:54:38 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:54:38.210868 | orchestrator | 2026-03-05 00:54:38 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 00:54:38.211020 | orchestrator | 2026-03-05 00:54:38 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:54:41.256452 | orchestrator | 2026-03-05 00:54:41 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:54:41.259031 | orchestrator | 2026-03-05 00:54:41 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:54:41.261760 | orchestrator | 2026-03-05 00:54:41 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 00:54:41.262311 | orchestrator | 2026-03-05 00:54:41 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:54:44.292204 | orchestrator | 2026-03-05 00:54:44 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:54:44.293049 | orchestrator | 2026-03-05 00:54:44 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:54:44.293818 | orchestrator | 2026-03-05 00:54:44 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 00:54:44.293864 | orchestrator | 2026-03-05 00:54:44 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:54:47.329431 | orchestrator | 2026-03-05 00:54:47 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:54:47.333388 | orchestrator | 2026-03-05 00:54:47 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:54:47.333916 | orchestrator | 2026-03-05 00:54:47 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 00:54:47.334331 | orchestrator | 2026-03-05 00:54:47 | INFO  | Wait 1 second(s) until the next 
check 2026-03-05 00:54:50.383276 | orchestrator | 2026-03-05 00:54:50 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:54:50.387047 | orchestrator | 2026-03-05 00:54:50 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:54:50.390144 | orchestrator | 2026-03-05 00:54:50 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 00:54:50.390217 | orchestrator | 2026-03-05 00:54:50 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:54:53.434850 | orchestrator | 2026-03-05 00:54:53 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:54:53.435219 | orchestrator | 2026-03-05 00:54:53 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:54:53.436070 | orchestrator | 2026-03-05 00:54:53 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 00:54:53.436089 | orchestrator | 2026-03-05 00:54:53 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:54:56.471731 | orchestrator | 2026-03-05 00:54:56 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:54:56.474434 | orchestrator | 2026-03-05 00:54:56 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:54:56.477825 | orchestrator | 2026-03-05 00:54:56 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 00:54:56.477923 | orchestrator | 2026-03-05 00:54:56 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:54:59.522782 | orchestrator | 2026-03-05 00:54:59 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:54:59.527861 | orchestrator | 2026-03-05 00:54:59 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:54:59.531370 | orchestrator | 2026-03-05 00:54:59 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 
00:54:59.531441 | orchestrator | 2026-03-05 00:54:59 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:55:02.572370 | orchestrator | 2026-03-05 00:55:02 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:55:02.572805 | orchestrator | 2026-03-05 00:55:02 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:55:02.575655 | orchestrator | 2026-03-05 00:55:02 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 00:55:02.577129 | orchestrator | 2026-03-05 00:55:02 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:55:05.611419 | orchestrator | 2026-03-05 00:55:05 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:55:05.617214 | orchestrator | 2026-03-05 00:55:05 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:55:05.619216 | orchestrator | 2026-03-05 00:55:05 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state STARTED 2026-03-05 00:55:05.619505 | orchestrator | 2026-03-05 00:55:05 | INFO  | Wait 1 second(s) until the next check 2026-03-05 00:55:08.659674 | orchestrator | 2026-03-05 00:55:08 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:55:08.659777 | orchestrator | 2026-03-05 00:55:08 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED 2026-03-05 00:55:08.660692 | orchestrator | 2026-03-05 00:55:08 | INFO  | Task 10c3665e-40f6-4b0e-85ab-2dac1ebae7cd is in state SUCCESS 2026-03-05 00:55:08.663150 | orchestrator | 2026-03-05 00:55:08.663293 | orchestrator | 2026-03-05 00:55:08.663309 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 00:55:08.663321 | orchestrator | 2026-03-05 00:55:08.663331 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 00:55:08.663359 | orchestrator | Thursday 05 March 2026 
00:52:43 +0000 (0:00:00.148) 0:00:00.148 ******** 2026-03-05 00:55:08.663369 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:55:08.663381 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:55:08.663411 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:55:08.663422 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:08.663431 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:08.663441 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:08.663451 | orchestrator | 2026-03-05 00:55:08.663461 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 00:55:08.663471 | orchestrator | Thursday 05 March 2026 00:52:44 +0000 (0:00:00.749) 0:00:00.897 ******** 2026-03-05 00:55:08.663485 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-05 00:55:08.663502 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-05 00:55:08.663518 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-05 00:55:08.663534 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-05 00:55:08.663549 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-05 00:55:08.663564 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-05 00:55:08.663581 | orchestrator | 2026-03-05 00:55:08.663598 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-05 00:55:08.663614 | orchestrator | 2026-03-05 00:55:08.663631 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-05 00:55:08.663649 | orchestrator | Thursday 05 March 2026 00:52:45 +0000 (0:00:00.721) 0:00:01.619 ******** 2026-03-05 00:55:08.663667 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:08.663687 | orchestrator | 2026-03-05 00:55:08.663704 | 
orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-05 00:55:08.663721 | orchestrator | Thursday 05 March 2026 00:52:46 +0000 (0:00:01.448) 0:00:03.067 ******** 2026-03-05 00:55:08.663743 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.663765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.663785 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.663803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.663816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.663857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.663868 | orchestrator | 2026-03-05 00:55:08.663878 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-05 00:55:08.663896 | orchestrator | Thursday 05 March 2026 00:52:47 +0000 (0:00:01.201) 0:00:04.268 ******** 2026-03-05 00:55:08.663906 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.663917 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.663927 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.663937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.663947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.663957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664066 | orchestrator | 2026-03-05 00:55:08.664091 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-05 00:55:08.664108 | orchestrator | Thursday 05 March 2026 00:52:49 +0000 (0:00:01.340) 0:00:05.609 ******** 2026-03-05 00:55:08.664126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664157 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664197 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-05 00:55:08.664217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664269 | orchestrator | 2026-03-05 00:55:08.664279 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-05 00:55:08.664289 | orchestrator | Thursday 05 March 2026 00:52:50 +0000 (0:00:01.118) 0:00:06.728 ******** 2026-03-05 00:55:08.664299 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664310 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664382 | orchestrator | 2026-03-05 00:55:08.664392 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-05 00:55:08.664401 | orchestrator | Thursday 05 March 2026 00:52:52 +0000 (0:00:02.026) 0:00:08.754 ******** 2026-03-05 00:55:08.664411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664432 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 00:55:08.664478 | orchestrator | 2026-03-05 00:55:08.664488 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-05 00:55:08.664498 | orchestrator | Thursday 05 March 2026 
00:52:53 +0000 (0:00:01.281) 0:00:10.035 ******** 2026-03-05 00:55:08.664508 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:55:08.664519 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:55:08.664529 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:08.664538 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:55:08.664548 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:08.664558 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:08.664567 | orchestrator | 2026-03-05 00:55:08.664577 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-05 00:55:08.664587 | orchestrator | Thursday 05 March 2026 00:52:56 +0000 (0:00:03.077) 0:00:13.113 ******** 2026-03-05 00:55:08.664596 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-05 00:55:08.664606 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-05 00:55:08.664616 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-05 00:55:08.664631 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-05 00:55:08.664641 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-05 00:55:08.664656 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-05 00:55:08.664666 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-05 00:55:08.664676 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-05 00:55:08.664686 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-05 00:55:08.664696 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-05 00:55:08.664705 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-05 00:55:08.664715 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-05 00:55:08.664725 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-05 00:55:08.664736 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-05 00:55:08.664745 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-05 00:55:08.664756 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-05 00:55:08.664765 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-05 00:55:08.664775 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-05 00:55:08.664792 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-05 00:55:08.664802 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-05 00:55:08.664812 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-05 00:55:08.664822 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-05 00:55:08.664831 | orchestrator | 
changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-05 00:55:08.664841 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-05 00:55:08.664851 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-05 00:55:08.664860 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-05 00:55:08.664870 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-05 00:55:08.664880 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-05 00:55:08.664890 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-05 00:55:08.664900 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-05 00:55:08.664909 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-05 00:55:08.664919 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-05 00:55:08.664929 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-05 00:55:08.664939 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-05 00:55:08.664949 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-05 00:55:08.664958 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-05 00:55:08.664998 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-05 00:55:08.665016 | orchestrator | ok: [testbed-node-3] 
=> (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-05 00:55:08.665031 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-05 00:55:08.665048 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-05 00:55:08.665070 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-05 00:55:08.665087 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-05 00:55:08.665104 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-05 00:55:08.665123 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-05 00:55:08.665139 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-05 00:55:08.665154 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-05 00:55:08.665164 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-05 00:55:08.665183 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-05 00:55:08.665193 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-05 00:55:08.665203 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 
'value': '', 'state': 'absent'}) 2026-03-05 00:55:08.665213 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-05 00:55:08.665222 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-05 00:55:08.665233 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-05 00:55:08.665242 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-05 00:55:08.665252 | orchestrator | 2026-03-05 00:55:08.665262 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-05 00:55:08.665272 | orchestrator | Thursday 05 March 2026 00:53:18 +0000 (0:00:21.650) 0:00:34.763 ******** 2026-03-05 00:55:08.665282 | orchestrator | 2026-03-05 00:55:08.665292 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-05 00:55:08.665302 | orchestrator | Thursday 05 March 2026 00:53:18 +0000 (0:00:00.074) 0:00:34.838 ******** 2026-03-05 00:55:08.665312 | orchestrator | 2026-03-05 00:55:08.665322 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-05 00:55:08.665332 | orchestrator | Thursday 05 March 2026 00:53:18 +0000 (0:00:00.071) 0:00:34.910 ******** 2026-03-05 00:55:08.665341 | orchestrator | 2026-03-05 00:55:08.665351 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-05 00:55:08.665363 | orchestrator | Thursday 05 March 2026 00:53:18 +0000 (0:00:00.062) 0:00:34.973 ******** 2026-03-05 00:55:08.665379 | orchestrator | 2026-03-05 00:55:08.665394 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-03-05 00:55:08.665410 | orchestrator | Thursday 05 March 2026 00:53:18 +0000 (0:00:00.063) 0:00:35.036 ******** 2026-03-05 00:55:08.665426 | orchestrator | 2026-03-05 00:55:08.665442 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-05 00:55:08.665459 | orchestrator | Thursday 05 March 2026 00:53:18 +0000 (0:00:00.059) 0:00:35.096 ******** 2026-03-05 00:55:08.665475 | orchestrator | 2026-03-05 00:55:08.665494 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-05 00:55:08.665505 | orchestrator | Thursday 05 March 2026 00:53:18 +0000 (0:00:00.061) 0:00:35.158 ******** 2026-03-05 00:55:08.665514 | orchestrator | ok: [testbed-node-4] 2026-03-05 00:55:08.665525 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:08.665534 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:08.665544 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:08.665554 | orchestrator | ok: [testbed-node-3] 2026-03-05 00:55:08.665563 | orchestrator | ok: [testbed-node-5] 2026-03-05 00:55:08.665573 | orchestrator | 2026-03-05 00:55:08.665582 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-05 00:55:08.665593 | orchestrator | Thursday 05 March 2026 00:53:20 +0000 (0:00:01.661) 0:00:36.820 ******** 2026-03-05 00:55:08.665602 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:55:08.665612 | orchestrator | changed: [testbed-node-4] 2026-03-05 00:55:08.665622 | orchestrator | changed: [testbed-node-5] 2026-03-05 00:55:08.665631 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:55:08.665641 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:55:08.665650 | orchestrator | changed: [testbed-node-3] 2026-03-05 00:55:08.665660 | orchestrator | 2026-03-05 00:55:08.665670 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 
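[Editor's note] The "Configure OVN in OVSDB" task above writes per-chassis external-ids into Open vSwitch: encap IP/type, the ovn-remote endpoint list, probe intervals, and (only on the gateway chassis testbed-node-0/1/2, per the present/absent items in the log) ovn-bridge-mappings and ovn-cms-options. Outside of kolla-ansible the same settings would be applied with `ovs-vsctl`; the sketch below is a dry run that only prints the command (values copied from the log for testbed-node-0, the command form is standard OVS/OVN tooling, not taken from this job):

```shell
# Dry-run sketch: build and print the ovs-vsctl invocation that mirrors the
# settings kolla-ansible applied to testbed-node-0, instead of executing it.
OVN_ENCAP_IP="192.168.16.10"
OVN_REMOTE="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"

CMD="ovs-vsctl set open_vswitch . \
external_ids:ovn-encap-ip=${OVN_ENCAP_IP} \
external_ids:ovn-encap-type=geneve \
external_ids:ovn-remote=${OVN_REMOTE} \
external_ids:ovn-remote-probe-interval=60000 \
external_ids:ovn-openflow-probe-interval=60 \
external_ids:ovn-monitor-all=false \
external_ids:ovn-bridge-mappings=physnet1:br-ex \
external_ids:ovn-cms-options=enable-chassis-as-gw,availability-zones=nova"

# Print for review; run on a real chassis only after checking the values.
echo "$CMD"
```

Note that `enable-chassis-as-gw` in ovn-cms-options is what marks a chassis as an OVN gateway, which is why the log shows it set on the three control nodes and removed (`state: absent`) on the pure compute nodes.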
2026-03-05 00:55:08.665680 | orchestrator | 2026-03-05 00:55:08.665724 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-05 00:55:08.665744 | orchestrator | Thursday 05 March 2026 00:53:47 +0000 (0:00:26.668) 0:01:03.488 ******** 2026-03-05 00:55:08.665754 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:08.665764 | orchestrator | 2026-03-05 00:55:08.665774 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-05 00:55:08.665783 | orchestrator | Thursday 05 March 2026 00:53:48 +0000 (0:00:01.110) 0:01:04.599 ******** 2026-03-05 00:55:08.665793 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:08.665803 | orchestrator | 2026-03-05 00:55:08.665821 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-05 00:55:08.665831 | orchestrator | Thursday 05 March 2026 00:53:48 +0000 (0:00:00.494) 0:01:05.093 ******** 2026-03-05 00:55:08.665852 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:08.665862 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:08.665872 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:08.665882 | orchestrator | 2026-03-05 00:55:08.665892 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-05 00:55:08.665901 | orchestrator | Thursday 05 March 2026 00:53:49 +0000 (0:00:00.995) 0:01:06.089 ******** 2026-03-05 00:55:08.665914 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:08.665930 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:08.665947 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:08.665963 | orchestrator | 2026-03-05 00:55:08.666007 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 
2026-03-05 00:55:08.666104 | orchestrator | Thursday 05 March 2026 00:53:50 +0000 (0:00:00.508) 0:01:06.597 ******** 2026-03-05 00:55:08.666125 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:08.666142 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:08.666159 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:08.666176 | orchestrator | 2026-03-05 00:55:08.666193 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-05 00:55:08.666209 | orchestrator | Thursday 05 March 2026 00:53:50 +0000 (0:00:00.593) 0:01:07.190 ******** 2026-03-05 00:55:08.666226 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:08.666242 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:08.666259 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:08.666276 | orchestrator | 2026-03-05 00:55:08.666293 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-05 00:55:08.666309 | orchestrator | Thursday 05 March 2026 00:53:51 +0000 (0:00:00.639) 0:01:07.830 ******** 2026-03-05 00:55:08.666325 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:08.666342 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:08.666360 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:08.666378 | orchestrator | 2026-03-05 00:55:08.666391 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-05 00:55:08.666401 | orchestrator | Thursday 05 March 2026 00:53:52 +0000 (0:00:01.096) 0:01:08.926 ******** 2026-03-05 00:55:08.666411 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.666421 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.666431 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.666440 | orchestrator | 2026-03-05 00:55:08.666450 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-05 00:55:08.666460 | orchestrator | Thursday 05 
March 2026 00:53:53 +0000 (0:00:00.526) 0:01:09.453 ******** 2026-03-05 00:55:08.666470 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.666480 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.666489 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.666499 | orchestrator | 2026-03-05 00:55:08.666508 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-05 00:55:08.666518 | orchestrator | Thursday 05 March 2026 00:53:53 +0000 (0:00:00.272) 0:01:09.725 ******** 2026-03-05 00:55:08.666531 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.666559 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.666575 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.666591 | orchestrator | 2026-03-05 00:55:08.666607 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-05 00:55:08.666625 | orchestrator | Thursday 05 March 2026 00:53:53 +0000 (0:00:00.280) 0:01:10.005 ******** 2026-03-05 00:55:08.666637 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.666647 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.666657 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.666667 | orchestrator | 2026-03-05 00:55:08.666676 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-05 00:55:08.666686 | orchestrator | Thursday 05 March 2026 00:53:54 +0000 (0:00:00.823) 0:01:10.828 ******** 2026-03-05 00:55:08.666696 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.666707 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.666716 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.666726 | orchestrator | 2026-03-05 00:55:08.666736 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-05 00:55:08.666746 | orchestrator | Thursday 05 
March 2026 00:53:55 +0000 (0:00:00.698) 0:01:11.527 ******** 2026-03-05 00:55:08.666756 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.666765 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.666775 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.666785 | orchestrator | 2026-03-05 00:55:08.666795 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-05 00:55:08.666804 | orchestrator | Thursday 05 March 2026 00:53:55 +0000 (0:00:00.659) 0:01:12.187 ******** 2026-03-05 00:55:08.666814 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.666824 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.666834 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.666843 | orchestrator | 2026-03-05 00:55:08.666853 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-05 00:55:08.666863 | orchestrator | Thursday 05 March 2026 00:53:56 +0000 (0:00:00.309) 0:01:12.496 ******** 2026-03-05 00:55:08.666873 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.666882 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.666892 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.666901 | orchestrator | 2026-03-05 00:55:08.666911 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-05 00:55:08.666921 | orchestrator | Thursday 05 March 2026 00:53:56 +0000 (0:00:00.745) 0:01:13.242 ******** 2026-03-05 00:55:08.666931 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.666941 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.666950 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.666960 | orchestrator | 2026-03-05 00:55:08.666997 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-05 00:55:08.667009 | orchestrator | Thursday 05 
March 2026 00:53:57 +0000 (0:00:00.592) 0:01:13.834 ******** 2026-03-05 00:55:08.667019 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.667029 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.667039 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.667049 | orchestrator | 2026-03-05 00:55:08.667068 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-05 00:55:08.667079 | orchestrator | Thursday 05 March 2026 00:53:57 +0000 (0:00:00.292) 0:01:14.127 ******** 2026-03-05 00:55:08.667090 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.667107 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.667118 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.667127 | orchestrator | 2026-03-05 00:55:08.667137 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-05 00:55:08.667147 | orchestrator | Thursday 05 March 2026 00:53:58 +0000 (0:00:00.328) 0:01:14.456 ******** 2026-03-05 00:55:08.667157 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.667166 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.667184 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.667194 | orchestrator | 2026-03-05 00:55:08.667204 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-05 00:55:08.667213 | orchestrator | Thursday 05 March 2026 00:53:58 +0000 (0:00:00.385) 0:01:14.841 ******** 2026-03-05 00:55:08.667223 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:55:08.667233 | orchestrator | 2026-03-05 00:55:08.667243 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-05 00:55:08.667252 | orchestrator | Thursday 05 March 2026 00:53:59 +0000 (0:00:00.989) 0:01:15.831 ******** 
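[Editor's note] The ovn-db tasks above (lookup_cluster.yml, then bootstrap-initial.yml) found no existing NB/SB volumes and are bootstrapping a fresh three-node Raft cluster on testbed-node-0/1/2. After such a deployment, cluster health could be verified with `ovn-appctl cluster/status` inside the DB containers; the sketch below only prints the commands, since they need a live deployment. The container names `ovn_nb_db`/`ovn_sb_db` come from the log; the control-socket paths are an assumption based on common OVN packaging and may differ in the kolla images:

```shell
# Dry-run: print the Raft health-check commands for the NB and SB databases
# rather than executing them (they require a running cluster).
NB_CHECK="docker exec ovn_nb_db ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound"
SB_CHECK="docker exec ovn_sb_db ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound"
echo "$NB_CHECK"
echo "$SB_CHECK"
```

On a healthy cluster, `cluster/status` reports the server's role (leader or follower) and the connected members, which is the same leader/follower information the "Divide hosts by their OVN NB/SB leader/follower role" tasks above would gather on a re-run against an existing cluster.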
2026-03-05 00:55:08.667262 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:08.667272 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:08.667281 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:08.667291 | orchestrator | 2026-03-05 00:55:08.667301 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-05 00:55:08.667310 | orchestrator | Thursday 05 March 2026 00:54:00 +0000 (0:00:00.486) 0:01:16.317 ******** 2026-03-05 00:55:08.667320 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:55:08.667330 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:55:08.667340 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:55:08.667349 | orchestrator | 2026-03-05 00:55:08.667361 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-05 00:55:08.667379 | orchestrator | Thursday 05 March 2026 00:54:00 +0000 (0:00:00.510) 0:01:16.828 ******** 2026-03-05 00:55:08.667397 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.667414 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.667430 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.667445 | orchestrator | 2026-03-05 00:55:08.667461 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-05 00:55:08.667476 | orchestrator | Thursday 05 March 2026 00:54:01 +0000 (0:00:00.585) 0:01:17.414 ******** 2026-03-05 00:55:08.667491 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.667508 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.667527 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.667544 | orchestrator | 2026-03-05 00:55:08.667561 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-05 00:55:08.667577 | orchestrator | Thursday 05 March 2026 00:54:01 +0000 (0:00:00.470) 0:01:17.885 ******** 2026-03-05 00:55:08.667595 | 
orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.667611 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.667626 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.667645 | orchestrator | 2026-03-05 00:55:08.667663 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-05 00:55:08.667679 | orchestrator | Thursday 05 March 2026 00:54:02 +0000 (0:00:00.439) 0:01:18.324 ******** 2026-03-05 00:55:08.667696 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.667714 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.667730 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.667746 | orchestrator | 2026-03-05 00:55:08.667761 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-05 00:55:08.667778 | orchestrator | Thursday 05 March 2026 00:54:02 +0000 (0:00:00.356) 0:01:18.680 ******** 2026-03-05 00:55:08.667795 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.667814 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.667832 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.667849 | orchestrator | 2026-03-05 00:55:08.667867 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-05 00:55:08.667884 | orchestrator | Thursday 05 March 2026 00:54:03 +0000 (0:00:00.707) 0:01:19.391 ******** 2026-03-05 00:55:08.667902 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:55:08.667919 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:55:08.667937 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:55:08.667994 | orchestrator | 2026-03-05 00:55:08.668017 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-05 00:55:08.668035 | orchestrator | Thursday 05 March 2026 00:54:03 +0000 (0:00:00.381) 0:01:19.773 ******** 2026-03-05 
00:55:08.668054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668248 | orchestrator |
2026-03-05 00:55:08.668265 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-05 00:55:08.668282 | orchestrator | Thursday 05 March 2026 00:54:04 +0000 (0:00:01.453) 0:01:21.226 ********
2026-03-05 00:55:08.668300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668491 | orchestrator |
2026-03-05 00:55:08.668508 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-05 00:55:08.668525 | orchestrator | Thursday 05 March 2026 00:54:09 +0000 (0:00:04.514) 0:01:25.740 ********
2026-03-05 00:55:08.668536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.668653 | orchestrator |
2026-03-05 00:55:08.668663 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-05 00:55:08.668673 | orchestrator | Thursday 05 March 2026 00:54:12 +0000 (0:00:02.900) 0:01:28.641 ********
2026-03-05 00:55:08.668683 | orchestrator |
2026-03-05 00:55:08.668693 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-05 00:55:08.668703 | orchestrator | Thursday 05 March 2026 00:54:12 +0000 (0:00:00.066) 0:01:28.708 ********
2026-03-05 00:55:08.668713 | orchestrator |
2026-03-05 00:55:08.668723 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-05 00:55:08.668739 | orchestrator | Thursday 05 March 2026 00:54:12 +0000 (0:00:00.065) 0:01:28.773 ********
2026-03-05 00:55:08.668755 | orchestrator |
2026-03-05 00:55:08.668771 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-05 00:55:08.668788 | orchestrator | Thursday 05 March 2026 00:54:12 +0000 (0:00:00.069) 0:01:28.843 ********
2026-03-05 00:55:08.668803 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:55:08.668820 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:55:08.668834 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:55:08.668850 | orchestrator |
2026-03-05 00:55:08.668866 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-05 00:55:08.668882 | orchestrator | Thursday 05 March 2026 00:54:15 +0000 (0:00:02.489) 0:01:31.333 ********
2026-03-05 00:55:08.668898 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:55:08.668916 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:55:08.668933 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:55:08.668949 | orchestrator |
2026-03-05 00:55:08.668965 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-05 00:55:08.669013 | orchestrator | Thursday 05 March 2026 00:54:22 +0000 (0:00:07.595) 0:01:38.928 ********
2026-03-05 00:55:08.669030 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:55:08.669045 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:55:08.669062 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:55:08.669079 | orchestrator |
2026-03-05 00:55:08.669094 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-05 00:55:08.669110 | orchestrator | Thursday 05 March 2026 00:54:25 +0000 (0:00:02.784) 0:01:41.713 ********
2026-03-05 00:55:08.669127 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:08.669143 | orchestrator |
2026-03-05 00:55:08.669160 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-05 00:55:08.669177 | orchestrator | Thursday 05 March 2026 00:54:25 +0000 (0:00:00.155) 0:01:41.869 ********
2026-03-05 00:55:08.669192 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:08.669225 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:08.669243 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:08.669260 | orchestrator |
2026-03-05 00:55:08.669289 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-05 00:55:08.669304 | orchestrator | Thursday 05 March 2026 00:54:26 +0000 (0:00:00.890) 0:01:42.760 ********
2026-03-05 00:55:08.669319 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:08.669335 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:08.669361 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:55:08.669376 | orchestrator |
2026-03-05 00:55:08.669391 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-05 00:55:08.669407 | orchestrator | Thursday 05 March 2026 00:54:27 +0000 (0:00:00.712) 0:01:43.472 ********
2026-03-05 00:55:08.669425 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:08.669442 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:08.669458 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:08.669475 | orchestrator |
2026-03-05 00:55:08.669491 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-05 00:55:08.669525 | orchestrator | Thursday 05 March 2026 00:54:27 +0000 (0:00:00.765) 0:01:44.238 ********
2026-03-05 00:55:08.669542 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:08.669559 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:08.669571 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:55:08.669580 | orchestrator |
2026-03-05 00:55:08.669590 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-05 00:55:08.669600 | orchestrator | Thursday 05 March 2026 00:54:28 +0000 (0:00:00.981) 0:01:45.219 ********
2026-03-05 00:55:08.669610 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:08.669620 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:08.669629 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:08.669639 | orchestrator |
2026-03-05 00:55:08.669649 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-05 00:55:08.669658 | orchestrator | Thursday 05 March 2026 00:54:29 +0000 (0:00:00.842)
0:01:46.062 ********
2026-03-05 00:55:08.669668 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:08.669678 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:08.669688 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:08.669698 | orchestrator |
2026-03-05 00:55:08.669708 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-05 00:55:08.669718 | orchestrator | Thursday 05 March 2026 00:54:30 +0000 (0:00:00.995) 0:01:47.058 ********
2026-03-05 00:55:08.669728 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:08.669738 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:08.669748 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:08.669757 | orchestrator |
2026-03-05 00:55:08.669767 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-05 00:55:08.669778 | orchestrator | Thursday 05 March 2026 00:54:31 +0000 (0:00:00.336) 0:01:47.394 ********
2026-03-05 00:55:08.669789 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.669799 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.669810 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.669820 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.669833 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.669843 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.669877 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.669889 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.669899 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.669909 | orchestrator |
2026-03-05 00:55:08.669919 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-05 00:55:08.669929 | orchestrator | Thursday 05 March 2026 00:54:32 +0000 (0:00:01.693) 0:01:49.087 ********
2026-03-05 00:55:08.669939 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.669950 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.669960 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670013 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670159 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670222 | orchestrator |
2026-03-05 00:55:08.670237 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-05 00:55:08.670254 | orchestrator | Thursday 05 March 2026 00:54:36 +0000 (0:00:04.057) 0:01:53.145 ********
2026-03-05 00:55:08.670271 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670290 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670308 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670337 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670392 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-05 00:55:08.670414 | orchestrator |
2026-03-05 00:55:08.670424 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-05 00:55:08.670434 | orchestrator | Thursday 05 March 2026 00:54:39 +0000 (0:00:03.100) 0:01:56.245 ********
2026-03-05 00:55:08.670443 | orchestrator |
2026-03-05 00:55:08.670454 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-05 00:55:08.670464 | orchestrator |
Thursday 05 March 2026 00:54:39 +0000 (0:00:00.062) 0:01:56.308 ********
2026-03-05 00:55:08.670474 | orchestrator |
2026-03-05 00:55:08.670484 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-05 00:55:08.670493 | orchestrator | Thursday 05 March 2026 00:54:40 +0000 (0:00:00.070) 0:01:56.378 ********
2026-03-05 00:55:08.670503 | orchestrator |
2026-03-05 00:55:08.670514 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-05 00:55:08.670524 | orchestrator | Thursday 05 March 2026 00:54:40 +0000 (0:00:00.069) 0:01:56.447 ********
2026-03-05 00:55:08.670534 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:55:08.670544 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:55:08.670554 | orchestrator |
2026-03-05 00:55:08.670564 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-05 00:55:08.670574 | orchestrator | Thursday 05 March 2026 00:54:46 +0000 (0:00:06.456) 0:02:02.904 ********
2026-03-05 00:55:08.670584 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:55:08.670593 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:55:08.670604 | orchestrator |
2026-03-05 00:55:08.670613 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-05 00:55:08.670623 | orchestrator | Thursday 05 March 2026 00:54:53 +0000 (0:00:06.453) 0:02:09.357 ********
2026-03-05 00:55:08.670633 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:55:08.670643 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:55:08.670652 | orchestrator |
2026-03-05 00:55:08.670662 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-05 00:55:08.670672 | orchestrator | Thursday 05 March 2026 00:54:59 +0000 (0:00:06.566) 0:02:15.923 ********
2026-03-05 00:55:08.670682 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:55:08.670699 | orchestrator |
2026-03-05 00:55:08.670709 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-05 00:55:08.670719 | orchestrator | Thursday 05 March 2026 00:54:59 +0000 (0:00:00.182) 0:02:16.106 ********
2026-03-05 00:55:08.670729 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:08.670739 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:08.670749 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:08.670759 | orchestrator |
2026-03-05 00:55:08.670770 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-05 00:55:08.670779 | orchestrator | Thursday 05 March 2026 00:55:00 +0000 (0:00:00.824) 0:02:16.930 ********
2026-03-05 00:55:08.670790 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:08.670800 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:08.670809 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:55:08.670819 | orchestrator |
2026-03-05 00:55:08.670829 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-05 00:55:08.670839 | orchestrator | Thursday 05 March 2026 00:55:01 +0000 (0:00:00.792) 0:02:17.723 ********
2026-03-05 00:55:08.670849 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:08.670859 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:08.670869 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:08.670879 | orchestrator |
2026-03-05 00:55:08.670889 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-05 00:55:08.670899 | orchestrator | Thursday 05 March 2026 00:55:02 +0000 (0:00:00.781) 0:02:18.504 ********
2026-03-05 00:55:08.670909 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:55:08.670919 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:55:08.670929 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:55:08.670939 | orchestrator |
2026-03-05 00:55:08.670950 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-05 00:55:08.670960 | orchestrator | Thursday 05 March 2026 00:55:02 +0000 (0:00:00.695) 0:02:19.200 ********
2026-03-05 00:55:08.670995 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:08.671006 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:08.671017 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:08.671035 | orchestrator |
2026-03-05 00:55:08.671053 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-05 00:55:08.671071 | orchestrator | Thursday 05 March 2026 00:55:04 +0000 (0:00:01.130) 0:02:20.330 ********
2026-03-05 00:55:08.671088 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:55:08.671104 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:55:08.671121 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:55:08.671136 | orchestrator |
2026-03-05 00:55:08.671153 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:55:08.671172 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-05 00:55:08.671190 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-05 00:55:08.671220 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-05 00:55:08.671247 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:55:08.671268 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:55:08.671286 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 00:55:08.671304 | orchestrator |
2026-03-05 00:55:08.671322 | orchestrator |
2026-03-05 00:55:08.671341 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:55:08.671359 | orchestrator | Thursday 05 March 2026 00:55:05 +0000 (0:00:01.312) 0:02:21.642 ********
2026-03-05 00:55:08.671391 | orchestrator | ===============================================================================
2026-03-05 00:55:08.671409 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 26.67s
2026-03-05 00:55:08.671424 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.65s
2026-03-05 00:55:08.671434 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.05s
2026-03-05 00:55:08.671444 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.35s
2026-03-05 00:55:08.671454 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.95s
2026-03-05 00:55:08.671464 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.51s
2026-03-05 00:55:08.671474 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.06s
2026-03-05 00:55:08.671484 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.10s
2026-03-05 00:55:08.671494 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.08s
2026-03-05 00:55:08.671504 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.90s
2026-03-05 00:55:08.671514 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.03s
2026-03-05 00:55:08.671525 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.69s
2026-03-05 00:55:08.671534 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.66s
2026-03-05 00:55:08.671544 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s
2026-03-05 00:55:08.671557 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.45s
2026-03-05 00:55:08.671574 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.34s
2026-03-05 00:55:08.671589 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.31s
2026-03-05 00:55:08.671606 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.28s
2026-03-05 00:55:08.671623 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.20s
2026-03-05 00:55:08.671634 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.13s
2026-03-05 00:55:08.671644 | orchestrator | 2026-03-05 00:55:08 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:55:11.698916 | orchestrator | 2026-03-05 00:55:11 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:55:11.699473 | orchestrator | 2026-03-05 00:55:11 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:55:11.699508 | orchestrator | 2026-03-05 00:55:11 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:55:14.749358 | orchestrator | 2026-03-05 00:55:14 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:55:14.751420 | orchestrator | 2026-03-05 00:55:14 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:55:14.751478 | orchestrator | 2026-03-05 00:55:14 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:55:17.801637 | orchestrator | 2026-03-05 00:55:17 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:55:17.802558 | orchestrator | 2026-03-05 00:55:17 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state STARTED
2026-03-05 00:58:20.692687 | orchestrator | 2026-03-05 00:58:20 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state STARTED 2026-03-05 00:58:20.694700 | orchestrator | 2026-03-05 00:58:20 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 00:58:20.698755 | orchestrator | 2026-03-05 00:58:20 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED 2026-03-05 00:58:20.707745 | orchestrator | 2026-03-05 00:58:20 | INFO  | Task 2ad58ffa-084a-4518-8efb-7dc814a94829 is in state SUCCESS 2026-03-05 00:58:20.709283 | orchestrator | 2026-03-05 00:58:20.709312 | orchestrator | 2026-03-05 00:58:20.709318 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 00:58:20.709324 | orchestrator | 2026-03-05 00:58:20.709330 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 00:58:20.709335 | orchestrator | Thursday 05 March 2026 00:51:36 +0000 (0:00:00.272) 0:00:00.272 ******** 2026-03-05 00:58:20.709341 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.709347 | orchestrator | ok: [testbed-node-1] 2026-03-05
00:58:20.709352 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.709356 | orchestrator | 2026-03-05 00:58:20.709362 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 00:58:20.709367 | orchestrator | Thursday 05 March 2026 00:51:37 +0000 (0:00:00.399) 0:00:00.672 ******** 2026-03-05 00:58:20.709376 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-05 00:58:20.709384 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-05 00:58:20.709393 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-05 00:58:20.709401 | orchestrator | 2026-03-05 00:58:20.709408 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-05 00:58:20.709415 | orchestrator | 2026-03-05 00:58:20.709424 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-05 00:58:20.709432 | orchestrator | Thursday 05 March 2026 00:51:37 +0000 (0:00:00.441) 0:00:01.113 ******** 2026-03-05 00:58:20.709440 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.709448 | orchestrator | 2026-03-05 00:58:20.709456 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-05 00:58:20.709464 | orchestrator | Thursday 05 March 2026 00:51:38 +0000 (0:00:00.597) 0:00:01.711 ******** 2026-03-05 00:58:20.709471 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.709480 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.709485 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.709490 | orchestrator | 2026-03-05 00:58:20.709495 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-05 00:58:20.709500 | orchestrator | Thursday 05 March 2026 00:51:38 +0000 (0:00:00.694) 
0:00:02.406 ******** 2026-03-05 00:58:20.709505 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.709531 | orchestrator | 2026-03-05 00:58:20.709536 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-05 00:58:20.709541 | orchestrator | Thursday 05 March 2026 00:51:39 +0000 (0:00:00.678) 0:00:03.084 ******** 2026-03-05 00:58:20.709548 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.709556 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.709563 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.709571 | orchestrator | 2026-03-05 00:58:20.709578 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-05 00:58:20.709585 | orchestrator | Thursday 05 March 2026 00:51:40 +0000 (0:00:00.706) 0:00:03.791 ******** 2026-03-05 00:58:20.709592 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-05 00:58:20.709599 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-05 00:58:20.709607 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-05 00:58:20.709683 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-05 00:58:20.709690 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-05 00:58:20.709695 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-05 00:58:20.709700 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-05 00:58:20.709706 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-05 00:58:20.709710 | orchestrator | ok: 
[testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-05 00:58:20.709715 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-05 00:58:20.709720 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-05 00:58:20.709724 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-05 00:58:20.709729 | orchestrator | 2026-03-05 00:58:20.709733 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-05 00:58:20.709738 | orchestrator | Thursday 05 March 2026 00:51:42 +0000 (0:00:02.583) 0:00:06.375 ******** 2026-03-05 00:58:20.709742 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-05 00:58:20.709748 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-05 00:58:20.709752 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-05 00:58:20.709757 | orchestrator | 2026-03-05 00:58:20.709762 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-05 00:58:20.709766 | orchestrator | Thursday 05 March 2026 00:51:43 +0000 (0:00:00.863) 0:00:07.238 ******** 2026-03-05 00:58:20.709771 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-05 00:58:20.709776 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-05 00:58:20.709780 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-05 00:58:20.709785 | orchestrator | 2026-03-05 00:58:20.709790 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-05 00:58:20.709794 | orchestrator | Thursday 05 March 2026 00:51:45 +0000 (0:00:01.858) 0:00:09.096 ******** 2026-03-05 00:58:20.709799 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-05 00:58:20.709804 | orchestrator | skipping: 
[testbed-node-1] => (item=ip_vs)  2026-03-05 00:58:20.709870 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.709881 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.709888 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-05 00:58:20.709895 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.709901 | orchestrator | 2026-03-05 00:58:20.709951 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-05 00:58:20.709960 | orchestrator | Thursday 05 March 2026 00:51:46 +0000 (0:00:01.077) 0:00:10.174 ******** 2026-03-05 00:58:20.709976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-05 00:58:20.709989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-05 
00:58:20.710006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-05 00:58:20.710144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:58:20.710163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:58:20.710184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:58:20.710193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:58:20.710207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:58:20.710228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:58:20.710262 | orchestrator | 2026-03-05 00:58:20.710272 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-05 00:58:20.710281 | orchestrator | Thursday 05 March 2026 00:51:49 +0000 (0:00:02.679) 0:00:12.854 ******** 2026-03-05 00:58:20.710314 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.710323 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.710332 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.710339 | orchestrator | 2026-03-05 00:58:20.710347 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-05 00:58:20.710355 | orchestrator | Thursday 05 March 2026 00:51:50 +0000 (0:00:01.329) 0:00:14.184 ******** 2026-03-05 00:58:20.710363 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-05 00:58:20.710371 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-05 00:58:20.710378 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-05 00:58:20.710386 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-05 00:58:20.710393 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-05 00:58:20.710401 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-05 00:58:20.710409 | orchestrator | 2026-03-05 00:58:20.710416 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-05 00:58:20.710424 | orchestrator | Thursday 05 March 2026 00:51:54 +0000 (0:00:03.317) 0:00:17.502 ******** 
2026-03-05 00:58:20.710431 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.710439 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.710447 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.710455 | orchestrator | 2026-03-05 00:58:20.710463 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-05 00:58:20.710472 | orchestrator | Thursday 05 March 2026 00:51:56 +0000 (0:00:02.016) 0:00:19.518 ******** 2026-03-05 00:58:20.710480 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.710487 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.710495 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.710574 | orchestrator | 2026-03-05 00:58:20.710584 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-05 00:58:20.710592 | orchestrator | Thursday 05 March 2026 00:51:59 +0000 (0:00:03.005) 0:00:22.523 ******** 2026-03-05 00:58:20.710600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-05 00:58:20.710617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:58:20.710658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.710674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d02fc92f433a13e47c817f517c8ad6c1d447d27b', '__omit_place_holder__d02fc92f433a13e47c817f517c8ad6c1d447d27b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-05 00:58:20.710682 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.710690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-05 00:58:20.710699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:58:20.710708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.710721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d02fc92f433a13e47c817f517c8ad6c1d447d27b', '__omit_place_holder__d02fc92f433a13e47c817f517c8ad6c1d447d27b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-05 00:58:20.710749 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.710758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:58:20.710770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:58:20.710778 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.710786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d02fc92f433a13e47c817f517c8ad6c1d447d27b', '__omit_place_holder__d02fc92f433a13e47c817f517c8ad6c1d447d27b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-05 00:58:20.710794 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.710802 | orchestrator | 2026-03-05 00:58:20.710810 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-05 00:58:20.710818 | orchestrator | Thursday 05 March 2026 00:52:00 +0000 (0:00:00.987) 0:00:23.511 ******** 2026-03-05 00:58:20.710826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-05 00:58:20.710838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-05 00:58:20.710852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-05 00:58:20.710864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:58:20.710873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:58:20.710881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.710889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.710897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d02fc92f433a13e47c817f517c8ad6c1d447d27b', '__omit_place_holder__d02fc92f433a13e47c817f517c8ad6c1d447d27b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-05 00:58:20.710922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d02fc92f433a13e47c817f517c8ad6c1d447d27b', '__omit_place_holder__d02fc92f433a13e47c817f517c8ad6c1d447d27b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-05 00:58:20.710935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:58:20.710943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.710952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d02fc92f433a13e47c817f517c8ad6c1d447d27b', '__omit_place_holder__d02fc92f433a13e47c817f517c8ad6c1d447d27b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-05 00:58:20.710960 | orchestrator | 2026-03-05 00:58:20.710968 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-05 00:58:20.710976 | orchestrator | Thursday 05 March 2026 00:52:04 +0000 (0:00:04.402) 0:00:27.913 ******** 2026-03-05 00:58:20.710984 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-05 00:58:20.710992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-05 00:58:20.711011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-05 00:58:20.711020 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:58:20.711121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:58:20.711132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}}) 2026-03-05 00:58:20.711139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:58:20.711147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:58:20.711161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:58:20.711168 | orchestrator | 2026-03-05 00:58:20.711175 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-05 00:58:20.711182 | orchestrator | Thursday 05 March 2026 00:52:08 +0000 (0:00:03.552) 0:00:31.466 ******** 2026-03-05 
00:58:20.711190 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-05 00:58:20.711204 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-05 00:58:20.711212 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-05 00:58:20.711219 | orchestrator |
2026-03-05 00:58:20.711227 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-03-05 00:58:20.711235 | orchestrator | Thursday 05 March 2026 00:52:10 +0000 (0:00:02.188) 0:00:33.655 ********
2026-03-05 00:58:20.711242 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-05 00:58:20.711250 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-05 00:58:20.711259 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-05 00:58:20.711265 | orchestrator |
2026-03-05 00:58:20.711273 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-03-05 00:58:20.711280 | orchestrator | Thursday 05 March 2026 00:52:15 +0000 (0:00:05.391) 0:00:39.046 ********
2026-03-05 00:58:20.711287 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.711295 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.711308 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.711315 | orchestrator |
2026-03-05 00:58:20.711323 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-03-05 00:58:20.711331 | orchestrator | Thursday 05 March 2026 00:52:16 +0000 (0:00:01.187) 0:00:40.234 ********
2026-03-05 00:58:20.711340 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-05 00:58:20.711350 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-05 00:58:20.711421 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-05 00:58:20.711432 | orchestrator |
2026-03-05 00:58:20.711440 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-03-05 00:58:20.711447 | orchestrator | Thursday 05 March 2026 00:52:20 +0000 (0:00:03.608) 0:00:43.842 ********
2026-03-05 00:58:20.711455 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-05 00:58:20.711463 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-05 00:58:20.711471 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-05 00:58:20.711479 | orchestrator |
2026-03-05 00:58:20.711487 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-03-05 00:58:20.711503 | orchestrator | Thursday 05 March 2026 00:52:24 +0000 (0:00:03.834) 0:00:47.677 ********
2026-03-05 00:58:20.711512 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-03-05 00:58:20.711521 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-03-05 00:58:20.711530 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-03-05 00:58:20.711537 | orchestrator |
2026-03-05 00:58:20.711544 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-03-05 00:58:20.711551 | orchestrator | Thursday 05 March 2026 00:52:26 +0000 (0:00:01.796) 0:00:49.474 ********
2026-03-05 00:58:20.711559 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-03-05 00:58:20.711567 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-03-05 00:58:20.711575 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-03-05 00:58:20.711582 | orchestrator |
2026-03-05 00:58:20.711588 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-05 00:58:20.711596 | orchestrator | Thursday 05 March 2026 00:52:29 +0000 (0:00:03.036) 0:00:52.510 ********
2026-03-05 00:58:20.711603 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:58:20.711610 | orchestrator |
2026-03-05 00:58:20.711617 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-03-05 00:58:20.711624 | orchestrator | Thursday 05 March 2026 00:52:31 +0000 (0:00:01.976) 0:00:54.486 ********
2026-03-05 00:58:20.711633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.711711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.711731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.711741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.711757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.711765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.711773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.711781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.711795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.711802 | orchestrator |
2026-03-05 00:58:20.711835 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-03-05 00:58:20.711843 | orchestrator | Thursday 05 March 2026 00:52:35 +0000 (0:00:04.422) 0:00:58.909 ********
2026-03-05 00:58:20.711856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.711869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.711877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.711885 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.711893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.711900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.711916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.711924 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.712012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.712028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.712044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.712051 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.712059 | orchestrator |
2026-03-05 00:58:20.712065 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-03-05 00:58:20.712073 | orchestrator | Thursday 05 March 2026 00:52:36 +0000 (0:00:00.571) 0:00:59.481 ********
2026-03-05 00:58:20.712155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.712166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.712178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.712183 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.712188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.712202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.712208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.712212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.712217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.712223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.712227 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.712232 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.712238 | orchestrator |
2026-03-05 00:58:20.712244 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-05 00:58:20.712269 | orchestrator | Thursday 05 March 2026 00:52:36 +0000 (0:00:00.781) 0:01:00.263 ********
2026-03-05 00:58:20.712279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.712296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.712331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.712337 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.712342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.712348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.712354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.712359 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.712369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.712383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.712396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.712404 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.712412 | orchestrator |
2026-03-05 00:58:20.712421 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-05 00:58:20.712429 | orchestrator | Thursday 05 March 2026 00:52:37 +0000 (0:00:00.874) 0:01:01.137 ********
2026-03-05 00:58:20.712438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.712447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.712454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.712459 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.712465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.712541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.712558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.712567 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.712576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.712585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.712593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.712601 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.712610 | orchestrator |
2026-03-05 00:58:20.712618 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-05 00:58:20.712627 | orchestrator | Thursday 05 March 2026 00:52:38 +0000 (0:00:00.766) 0:01:01.904 ********
2026-03-05 00:58:20.712635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.715463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.715606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-05 00:58:20.715617 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.715637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-05 00:58:20.715644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-05 00:58:20.715649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.715654 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.715659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:58:20.715733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:58:20.715774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.715782 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.715786 | orchestrator | 2026-03-05 00:58:20.715792 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-05 00:58:20.715797 | orchestrator | Thursday 05 March 2026 00:52:39 +0000 (0:00:00.687) 0:01:02.592 ******** 2026-03-05 00:58:20.715805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-05 00:58:20.715810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:58:20.715815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.715820 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.715825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-05 00:58:20.715834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:58:20.715844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.715850 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.715855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:58:20.715860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:58:20.715865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.715869 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.715874 | orchestrator | 2026-03-05 00:58:20.715879 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-05 00:58:20.715884 | orchestrator | Thursday 05 March 2026 00:52:40 +0000 (0:00:01.155) 0:01:03.747 ******** 2026-03-05 00:58:20.715888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-05 00:58:20.715897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:58:20.715923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.715929 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.715934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-05 00:58:20.715941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:58:20.715946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.715951 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.715956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:58:20.715965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:58:20.715970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.715974 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.715979 | orchestrator | 2026-03-05 00:58:20.715984 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-05 00:58:20.715992 | orchestrator | Thursday 05 March 2026 00:52:40 +0000 (0:00:00.642) 0:01:04.390 ******** 2026-03-05 00:58:20.715996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-03-05 00:58:20.716004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:58:20.716009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.716014 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.716019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-03-05 00:58:20.716049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-05 00:58:20.716070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:58:20.716126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-05 00:58:20.716133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.716142 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.716148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-05 00:58:20.716153 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.716159 | orchestrator | 2026-03-05 00:58:20.716164 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-05 00:58:20.716169 | orchestrator | Thursday 05 March 2026 00:52:41 +0000 (0:00:00.911) 0:01:05.301 ******** 2026-03-05 00:58:20.716175 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-05 00:58:20.716181 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-05 00:58:20.716193 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-05 00:58:20.716218 | orchestrator | 2026-03-05 00:58:20.716223 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-05 00:58:20.716227 | orchestrator | Thursday 05 March 2026 00:52:43 +0000 (0:00:01.972) 0:01:07.274 ******** 2026-03-05 00:58:20.716232 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-05 00:58:20.716237 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-05 00:58:20.716242 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-05 00:58:20.716246 | orchestrator | 2026-03-05 00:58:20.716251 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-05 00:58:20.716256 | orchestrator | Thursday 05 March 2026 00:52:45 +0000 (0:00:01.539) 0:01:08.814 ******** 2026-03-05 00:58:20.716260 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-05 00:58:20.716265 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-05 00:58:20.716269 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-05 00:58:20.716288 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-05 00:58:20.716293 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.716297 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-05 00:58:20.716302 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.716307 | orchestrator | skipping: [testbed-node-1] 
=> (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-05 00:58:20.716319 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.716327 | orchestrator | 2026-03-05 00:58:20.716332 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-05 00:58:20.716336 | orchestrator | Thursday 05 March 2026 00:52:46 +0000 (0:00:00.912) 0:01:09.726 ******** 2026-03-05 00:58:20.716345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-05 00:58:20.716419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-05 00:58:20.716431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-05 00:58:20.716445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:58:20.716453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:58:20.716461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-05 00:58:20.716468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:58:20.716482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:58:20.716490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-05 00:58:20.716503 | orchestrator | 2026-03-05 00:58:20.716515 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-05 00:58:20.716520 | orchestrator | Thursday 05 March 2026 00:52:49 +0000 (0:00:02.748) 0:01:12.474 ******** 2026-03-05 00:58:20.716525 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.716530 | orchestrator | 2026-03-05 00:58:20.716534 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-05 00:58:20.716539 | orchestrator | Thursday 05 March 2026 00:52:49 +0000 (0:00:00.500) 0:01:12.975 ******** 2026-03-05 00:58:20.716545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-05 00:58:20.716551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.716557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.716562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.716571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-05 00:58:20.716582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.716587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.716592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 
'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.716599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-05 00:58:20.716606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.716618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.716630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.716638 | orchestrator | 2026-03-05 00:58:20.716643 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-05 00:58:20.716649 | orchestrator | Thursday 05 March 2026 00:52:53 +0000 (0:00:03.957) 0:01:16.933 ******** 2026-03-05 00:58:20.716656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-05 00:58:20.716664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-05 00:58:20.716671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.716681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.716689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.716704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.716713 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.716720 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.716728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.716736 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.716743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-05 00:58:20.716752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.716763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.716780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.716787 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.716794 | orchestrator | 2026-03-05 00:58:20.716802 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-05 00:58:20.716809 | orchestrator | Thursday 05 March 2026 00:52:54 +0000 (0:00:01.319) 0:01:18.252 ******** 2026-03-05 00:58:20.716817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-05 00:58:20.716826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-05 00:58:20.716834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-05 00:58:20.716842 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.716850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-05 00:58:20.716857 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.716865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-05 00:58:20.716957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  
2026-03-05 00:58:20.716967 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.717002 | orchestrator | 2026-03-05 00:58:20.717008 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-05 00:58:20.717035 | orchestrator | Thursday 05 March 2026 00:52:55 +0000 (0:00:01.035) 0:01:19.287 ******** 2026-03-05 00:58:20.717040 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.717044 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.717137 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.717146 | orchestrator | 2026-03-05 00:58:20.717153 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-05 00:58:20.717160 | orchestrator | Thursday 05 March 2026 00:52:57 +0000 (0:00:01.482) 0:01:20.770 ******** 2026-03-05 00:58:20.717167 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.717174 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.717182 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.717189 | orchestrator | 2026-03-05 00:58:20.717196 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-05 00:58:20.717214 | orchestrator | Thursday 05 March 2026 00:53:00 +0000 (0:00:03.050) 0:01:23.821 ******** 2026-03-05 00:58:20.717221 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.717229 | orchestrator | 2026-03-05 00:58:20.717236 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-05 00:58:20.717244 | orchestrator | Thursday 05 March 2026 00:53:01 +0000 (0:00:01.001) 0:01:24.822 ******** 2026-03-05 00:58:20.717264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.717282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.717288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.717293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.717298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.717311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.717320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.717328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.717334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.717338 | orchestrator | 2026-03-05 00:58:20.717343 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-05 00:58:20.717348 | orchestrator | Thursday 05 March 2026 00:53:05 +0000 (0:00:03.772) 0:01:28.594 ******** 2026-03-05 00:58:20.717353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.717365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.717379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.717387 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.717398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.717407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.717414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.717451 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.717457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.717465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.717470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.717475 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.717480 | orchestrator |
2026-03-05 00:58:20.717484 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-03-05 00:58:20.717489 | orchestrator | Thursday 05 March 2026 00:53:05 +0000 (0:00:00.691) 0:01:29.285 ********
2026-03-05 00:58:20.717497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-05 00:58:20.717502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-05 00:58:20.717507 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.717512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-05 00:58:20.717516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-05 00:58:20.717521 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.717526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-05 00:58:20.717530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-05 00:58:20.717538 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.717543 | orchestrator |
2026-03-05 00:58:20.717548 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-03-05 00:58:20.717552 | orchestrator | Thursday 05 March 2026 00:53:06 +0000 (0:00:01.119) 0:01:30.405 ********
2026-03-05 00:58:20.717557 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:58:20.717562 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:58:20.717566 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:58:20.717571 | orchestrator |
2026-03-05 00:58:20.717576 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-03-05 00:58:20.717580 | orchestrator | Thursday 05 March 2026 00:53:08 +0000 (0:00:01.443) 0:01:31.848 ********
2026-03-05 00:58:20.717612 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:58:20.717616 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:58:20.717621 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:58:20.717625 | orchestrator |
2026-03-05 00:58:20.717630 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-03-05 00:58:20.717635 | orchestrator | Thursday 05 March 2026 00:53:10 +0000 (0:00:02.115) 0:01:33.964 ********
2026-03-05 00:58:20.717639 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.717644 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.717697 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.717702 | orchestrator |
2026-03-05 00:58:20.717707 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-03-05 00:58:20.717712 | orchestrator | Thursday 05 March 2026 00:53:10 +0000 (0:00:00.354) 0:01:34.318 ********
2026-03-05 00:58:20.717716 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:58:20.717733 | orchestrator |
2026-03-05 00:58:20.717738 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-03-05 00:58:20.717743 | orchestrator | Thursday 05 March 2026 00:53:11 +0000 (0:00:00.827) 0:01:35.146 ********
2026-03-05 00:58:20.720339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-05 00:58:20.720393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-05 00:58:20.720400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-05 00:58:20.720414 | orchestrator |
2026-03-05 00:58:20.720419 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-03-05 00:58:20.720424 | orchestrator | Thursday 05 March 2026 00:53:14 +0000 (0:00:02.470) 0:01:37.617 ********
2026-03-05 00:58:20.720430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-05 00:58:20.720435 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.720440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-05 00:58:20.720445 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.720480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-05 00:58:20.720500 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.720505 | orchestrator |
2026-03-05 00:58:20.720510 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-03-05 00:58:20.720515 | orchestrator | Thursday 05 March 2026 00:53:15 +0000 (0:00:01.290) 0:01:38.907 ********
2026-03-05 00:58:20.720523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-05 00:58:20.720535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-05 00:58:20.720541 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.720546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-05 00:58:20.720551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-05 00:58:20.720555 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.720560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-05 00:58:20.720565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-05 00:58:20.720569 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.720574 | orchestrator |
2026-03-05 00:58:20.720579 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-03-05 00:58:20.720583 | orchestrator | Thursday 05 March 2026 00:53:17 +0000 (0:00:01.745) 0:01:40.653 ********
2026-03-05 00:58:20.720588 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.720593 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.720597 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.720602 | orchestrator |
2026-03-05 00:58:20.720606 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-03-05 00:58:20.720611 | orchestrator | Thursday 05 March 2026 00:53:17 +0000 (0:00:00.559) 0:01:41.212 ********
2026-03-05 00:58:20.720616 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.720621 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.720627 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.720632 | orchestrator |
2026-03-05 00:58:20.720637 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-03-05 00:58:20.720647 | orchestrator | Thursday 05 March 2026 00:53:18 +0000 (0:00:01.072) 0:01:42.284 ********
2026-03-05 00:58:20.720653 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:58:20.720661 | orchestrator |
2026-03-05 00:58:20.720666 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-03-05 00:58:20.720672 | orchestrator | Thursday 05 March 2026 00:53:19 +0000 (0:00:00.749) 0:01:43.034 ********
2026-03-05 00:58:20.720681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.720688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.720693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.720744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720768 | orchestrator |
2026-03-05 00:58:20.720774 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-03-05 00:58:20.720779 | orchestrator | Thursday 05 March 2026 00:53:23 +0000 (0:00:04.259) 0:01:47.294 ********
2026-03-05 00:58:20.720785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.720791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720813 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.720820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.720826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720844 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.720849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.720862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.720879 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.720884 | orchestrator |
2026-03-05 00:58:20.720888 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-03-05 00:58:20.720893 | orchestrator | Thursday 05 March 2026 00:53:25 +0000 (0:00:01.648) 0:01:48.943 ********
2026-03-05 00:58:20.720898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-05 00:58:20.720903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-05 00:58:20.720909 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.720914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-05 00:58:20.720919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-05 00:58:20.720928 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.720933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port':
'8776', 'tls_backend': 'no'}})  2026-03-05 00:58:20.720938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-05 00:58:20.720956 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.720961 | orchestrator | 2026-03-05 00:58:20.720966 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-05 00:58:20.720971 | orchestrator | Thursday 05 March 2026 00:53:27 +0000 (0:00:01.754) 0:01:50.698 ******** 2026-03-05 00:58:20.720975 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.720980 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.720984 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.720989 | orchestrator | 2026-03-05 00:58:20.720993 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-05 00:58:20.720998 | orchestrator | Thursday 05 March 2026 00:53:28 +0000 (0:00:01.375) 0:01:52.073 ******** 2026-03-05 00:58:20.721002 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.721007 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.721012 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.721016 | orchestrator | 2026-03-05 00:58:20.721024 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-05 00:58:20.721028 | orchestrator | Thursday 05 March 2026 00:53:30 +0000 (0:00:02.071) 0:01:54.144 ******** 2026-03-05 00:58:20.721033 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.721038 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.721042 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.721047 | orchestrator | 2026-03-05 00:58:20.721051 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-03-05 00:58:20.721056 | orchestrator | Thursday 05 March 2026 00:53:31 +0000 (0:00:00.489) 0:01:54.634 ******** 2026-03-05 00:58:20.721060 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.721065 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.721069 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.721074 | orchestrator | 2026-03-05 00:58:20.721123 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-05 00:58:20.721129 | orchestrator | Thursday 05 March 2026 00:53:31 +0000 (0:00:00.269) 0:01:54.903 ******** 2026-03-05 00:58:20.721147 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.721152 | orchestrator | 2026-03-05 00:58:20.721156 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-05 00:58:20.721161 | orchestrator | Thursday 05 March 2026 00:53:32 +0000 (0:00:00.691) 0:01:55.595 ******** 2026-03-05 00:58:20.721179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 
00:58:20.721185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 00:58:20.721194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721208 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 00:58:20.721247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 00:58:20.721252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 00:58:20.721290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 00:58:20.721295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721327 | orchestrator | 2026-03-05 00:58:20.721332 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-05 00:58:20.721337 | orchestrator | Thursday 05 March 2026 00:53:36 +0000 (0:00:04.342) 0:01:59.937 ******** 2026-03-05 00:58:20.721342 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 00:58:20.721349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 00:58:20.721354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721383 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.721388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 00:58:20.721396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 00:58:20.721403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}})  2026-03-05 00:58:20.721415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 00:58:20.721420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2026-03-05 00:58:20.721433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721452 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721462 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.721467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721475 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.721480 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.721484 | orchestrator | 2026-03-05 00:58:20.721489 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-05 00:58:20.721493 | orchestrator | Thursday 05 March 2026 00:53:37 +0000 (0:00:00.849) 0:02:00.786 ******** 2026-03-05 00:58:20.721498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-05 00:58:20.721503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-05 00:58:20.721511 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.721515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-05 00:58:20.721521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-05 00:58:20.721525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-05 00:58:20.721529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-05 00:58:20.721534 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.721538 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.721542 | orchestrator | 2026-03-05 00:58:20.721546 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-05 00:58:20.721550 | orchestrator | Thursday 05 March 2026 00:53:38 +0000 (0:00:01.056) 0:02:01.843 ******** 2026-03-05 00:58:20.721554 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.721558 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.721563 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.721567 | orchestrator | 2026-03-05 00:58:20.721571 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-05 00:58:20.721575 | orchestrator | Thursday 05 March 2026 00:53:39 +0000 (0:00:01.520) 0:02:03.363 ******** 2026-03-05 00:58:20.721579 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.721583 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.721587 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.721591 | orchestrator | 2026-03-05 00:58:20.721595 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-05 00:58:20.721599 | orchestrator | Thursday 05 March 2026 00:53:41 +0000 (0:00:01.852) 0:02:05.215 ******** 2026-03-05 00:58:20.721604 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.721608 | orchestrator | skipping: [testbed-node-1] 
2026-03-05 00:58:20.721612 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.721616 | orchestrator | 2026-03-05 00:58:20.721620 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-05 00:58:20.721624 | orchestrator | Thursday 05 March 2026 00:53:42 +0000 (0:00:00.567) 0:02:05.783 ******** 2026-03-05 00:58:20.721628 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.721632 | orchestrator | 2026-03-05 00:58:20.721636 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-05 00:58:20.721640 | orchestrator | Thursday 05 March 2026 00:53:43 +0000 (0:00:01.059) 0:02:06.843 ******** 2026-03-05 00:58:20.721649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 00:58:20.721661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-05 00:58:20.721666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 00:58:20.721777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-05 00:58:20.721786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 00:58:20.721799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-05 00:58:20.721804 | orchestrator | 2026-03-05 00:58:20.721808 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-05 00:58:20.721813 | 
orchestrator | Thursday 05 March 2026 00:53:49 +0000 (0:00:05.767) 0:02:12.610 ******** 2026-03-05 00:58:20.721817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 00:58:20.721826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-05 00:58:20.721834 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.721839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 00:58:20.721846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-05 00:58:20.721853 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.721860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 00:58:20.721867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': 
'30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-05 00:58:20.721874 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.721879 | orchestrator | 2026-03-05 00:58:20.721883 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-05 00:58:20.721887 | orchestrator | Thursday 05 March 2026 00:53:53 +0000 (0:00:04.495) 0:02:17.106 ******** 2026-03-05 00:58:20.721892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-05 00:58:20.721898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-05 00:58:20.721903 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.721907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-05 00:58:20.721912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-05 00:58:20.721916 | orchestrator | skipping: [testbed-node-1] 2026-03-05 
00:58:20.721921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-05 00:58:20.721927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-05 00:58:20.721932 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.721936 | orchestrator | 2026-03-05 00:58:20.721940 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-05 00:58:20.721944 | orchestrator | Thursday 05 March 2026 00:53:58 +0000 (0:00:05.254) 0:02:22.361 ******** 2026-03-05 00:58:20.721948 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.721953 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.721957 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.721961 | orchestrator | 2026-03-05 00:58:20.721965 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-05 00:58:20.721969 | orchestrator | Thursday 05 March 2026 00:54:00 +0000 (0:00:01.488) 0:02:23.849 ******** 2026-03-05 00:58:20.721973 | orchestrator | changed: 
[testbed-node-0] 2026-03-05 00:58:20.721978 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.721982 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.721986 | orchestrator | 2026-03-05 00:58:20.721992 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-05 00:58:20.721996 | orchestrator | Thursday 05 March 2026 00:54:02 +0000 (0:00:02.449) 0:02:26.299 ******** 2026-03-05 00:58:20.722000 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.722004 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.722008 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.722044 | orchestrator | 2026-03-05 00:58:20.722050 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-05 00:58:20.722055 | orchestrator | Thursday 05 March 2026 00:54:03 +0000 (0:00:00.658) 0:02:26.957 ******** 2026-03-05 00:58:20.722059 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.722063 | orchestrator | 2026-03-05 00:58:20.722067 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-05 00:58:20.722071 | orchestrator | Thursday 05 March 2026 00:54:04 +0000 (0:00:00.841) 0:02:27.799 ******** 2026-03-05 00:58:20.722093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 00:58:20.722100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 00:58:20.722111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 00:58:20.722119 | orchestrator | 2026-03-05 00:58:20.722125 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-05 00:58:20.722132 | orchestrator | Thursday 05 March 2026 00:54:08 +0000 (0:00:03.814) 0:02:31.613 ******** 2026-03-05 00:58:20.722139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 00:58:20.722150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 00:58:20.722157 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.722163 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.722174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 00:58:20.722181 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.722188 | orchestrator | 2026-03-05 00:58:20.722194 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-05 00:58:20.722202 | orchestrator | Thursday 05 March 2026 00:54:08 +0000 (0:00:00.686) 0:02:32.300 ******** 2026-03-05 00:58:20.722208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-05 00:58:20.722213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-05 00:58:20.722221 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.722225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-05 00:58:20.722229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-05 00:58:20.722233 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.722238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-05 00:58:20.722242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}})  2026-03-05 00:58:20.722246 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.722250 | orchestrator | 2026-03-05 00:58:20.722254 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-05 00:58:20.722258 | orchestrator | Thursday 05 March 2026 00:54:09 +0000 (0:00:00.668) 0:02:32.969 ******** 2026-03-05 00:58:20.722263 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.722267 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.722271 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.722275 | orchestrator | 2026-03-05 00:58:20.722279 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-05 00:58:20.722283 | orchestrator | Thursday 05 March 2026 00:54:10 +0000 (0:00:01.425) 0:02:34.394 ******** 2026-03-05 00:58:20.722287 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.722291 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.722296 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.722300 | orchestrator | 2026-03-05 00:58:20.722304 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-05 00:58:20.722308 | orchestrator | Thursday 05 March 2026 00:54:13 +0000 (0:00:02.167) 0:02:36.561 ******** 2026-03-05 00:58:20.722312 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.722316 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.722320 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.722325 | orchestrator | 2026-03-05 00:58:20.722329 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-05 00:58:20.722333 | orchestrator | Thursday 05 March 2026 00:54:13 +0000 (0:00:00.671) 0:02:37.232 ******** 2026-03-05 00:58:20.722337 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 
2026-03-05 00:58:20.722360 | orchestrator | 2026-03-05 00:58:20.722364 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-05 00:58:20.722368 | orchestrator | Thursday 05 March 2026 00:54:14 +0000 (0:00:00.914) 0:02:38.146 ******** 2026-03-05 00:58:20.722380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 00:58:20.722390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 00:58:20.722402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 00:58:20.722410 | orchestrator | 2026-03-05 00:58:20.722415 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-05 00:58:20.722420 | orchestrator | Thursday 05 March 2026 00:54:18 +0000 (0:00:03.751) 0:02:41.898 ******** 2026-03-05 00:58:20.722429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 00:58:20.722435 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.722443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 00:58:20.722451 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.722459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2026-03-05 00:58:20.722467 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.722472 | orchestrator | 2026-03-05 00:58:20.722477 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-05 00:58:20.722482 | orchestrator | Thursday 05 March 2026 00:54:19 +0000 (0:00:01.242) 0:02:43.140 ******** 2026-03-05 00:58:20.722491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-05 00:58:20.722502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-05 00:58:20.722509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-05 00:58:20.722515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-05 00:58:20.722520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-05 00:58:20.722525 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.722530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-05 00:58:20.722535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-05 00:58:20.722540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-05 00:58:20.722545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-05 00:58:20.722550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-05 00:58:20.722555 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.722566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-05 00:58:20.722571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-05 00:58:20.722576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-05 00:58:20.722583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-05 00:58:20.722588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-05 00:58:20.722594 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.722598 | orchestrator | 2026-03-05 00:58:20.722603 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-05 00:58:20.722608 | orchestrator | Thursday 05 March 2026 00:54:20 +0000 (0:00:01.071) 0:02:44.212 ******** 2026-03-05 00:58:20.722613 | orchestrator | changed: [testbed-node-0] 2026-03-05 
00:58:20.722618 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.722623 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.722627 | orchestrator | 2026-03-05 00:58:20.722632 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-05 00:58:20.722637 | orchestrator | Thursday 05 March 2026 00:54:22 +0000 (0:00:01.365) 0:02:45.577 ******** 2026-03-05 00:58:20.722642 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.722647 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.722652 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.722657 | orchestrator | 2026-03-05 00:58:20.722662 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-05 00:58:20.722667 | orchestrator | Thursday 05 March 2026 00:54:24 +0000 (0:00:02.087) 0:02:47.665 ******** 2026-03-05 00:58:20.722672 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.722677 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.722681 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.722686 | orchestrator | 2026-03-05 00:58:20.722691 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-05 00:58:20.722695 | orchestrator | Thursday 05 March 2026 00:54:24 +0000 (0:00:00.349) 0:02:48.015 ******** 2026-03-05 00:58:20.722700 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.722705 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.722710 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.722715 | orchestrator | 2026-03-05 00:58:20.722720 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-05 00:58:20.722725 | orchestrator | Thursday 05 March 2026 00:54:25 +0000 (0:00:00.569) 0:02:48.584 ******** 2026-03-05 00:58:20.722730 | orchestrator | included: keystone for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-05 00:58:20.722734 | orchestrator | 2026-03-05 00:58:20.722739 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-05 00:58:20.722743 | orchestrator | Thursday 05 March 2026 00:54:26 +0000 (0:00:01.028) 0:02:49.613 ******** 2026-03-05 00:58:20.722751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 00:58:20.722759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 00:58:20.722767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 00:58:20.722772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 00:58:20.722777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 00:58:20.722787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 00:58:20.722794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 00:58:20.722799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 00:58:20.722803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 00:58:20.722807 | orchestrator | 2026-03-05 00:58:20.722811 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-05 00:58:20.722816 | orchestrator | Thursday 05 March 2026 00:54:29 +0000 (0:00:03.696) 0:02:53.310 ******** 2026-03-05 00:58:20.722820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 00:58:20.722827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 00:58:20.722832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 00:58:20.722836 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.722856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 00:58:20.722862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-05 00:58:20.722866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 00:58:20.722870 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.722878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 00:58:20.722882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 00:58:20.722890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 00:58:20.722895 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.722899 | orchestrator | 2026-03-05 00:58:20.722903 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-05 00:58:20.722908 | orchestrator | Thursday 05 March 2026 00:54:31 +0000 (0:00:01.182) 0:02:54.492 ******** 2026-03-05 00:58:20.722912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-05 00:58:20.722918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-05 00:58:20.722923 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.722927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-05 00:58:20.722932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-05 00:58:20.722936 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.722940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-05 00:58:20.722947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-05 00:58:20.722951 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.722955 | orchestrator | 2026-03-05 00:58:20.722960 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-05 00:58:20.722964 | orchestrator | Thursday 05 March 2026 00:54:32 +0000 (0:00:01.164) 0:02:55.656 ******** 2026-03-05 00:58:20.722968 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.722972 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.722976 | orchestrator | 
changed: [testbed-node-2] 2026-03-05 00:58:20.722980 | orchestrator | 2026-03-05 00:58:20.722984 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-05 00:58:20.722989 | orchestrator | Thursday 05 March 2026 00:54:33 +0000 (0:00:01.562) 0:02:57.219 ******** 2026-03-05 00:58:20.722993 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.722997 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.723001 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.723005 | orchestrator | 2026-03-05 00:58:20.723010 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-05 00:58:20.723014 | orchestrator | Thursday 05 March 2026 00:54:36 +0000 (0:00:02.358) 0:02:59.578 ******** 2026-03-05 00:58:20.723018 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.723022 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.723026 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.723030 | orchestrator | 2026-03-05 00:58:20.723034 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-05 00:58:20.723038 | orchestrator | Thursday 05 March 2026 00:54:36 +0000 (0:00:00.569) 0:03:00.147 ******** 2026-03-05 00:58:20.723043 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.723047 | orchestrator | 2026-03-05 00:58:20.723051 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-05 00:58:20.723055 | orchestrator | Thursday 05 March 2026 00:54:37 +0000 (0:00:00.999) 0:03:01.146 ******** 2026-03-05 00:58:20.723151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 00:58:20.723163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 00:58:20.723176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 00:58:20.723189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723193 | orchestrator | 2026-03-05 00:58:20.723197 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-05 00:58:20.723202 | orchestrator | Thursday 05 March 2026 00:54:41 +0000 (0:00:03.988) 0:03:05.134 ******** 2026-03-05 00:58:20.723209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 00:58:20.723216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723220 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.723225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  
2026-03-05 00:58:20.723229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723233 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.723241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 00:58:20.723249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723254 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.723258 | orchestrator | 2026-03-05 00:58:20.723262 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-05 00:58:20.723266 | orchestrator | Thursday 05 March 2026 00:54:42 +0000 (0:00:01.009) 0:03:06.144 ******** 2026-03-05 00:58:20.723271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-05 00:58:20.723275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-05 00:58:20.723280 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.723284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-05 00:58:20.723288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-05 00:58:20.723292 | orchestrator | 
skipping: [testbed-node-1] 2026-03-05 00:58:20.723296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-05 00:58:20.723301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-05 00:58:20.723305 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.723309 | orchestrator | 2026-03-05 00:58:20.723313 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-05 00:58:20.723317 | orchestrator | Thursday 05 March 2026 00:54:43 +0000 (0:00:01.060) 0:03:07.205 ******** 2026-03-05 00:58:20.723321 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.723325 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.723329 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.723334 | orchestrator | 2026-03-05 00:58:20.723338 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-05 00:58:20.723342 | orchestrator | Thursday 05 March 2026 00:54:45 +0000 (0:00:01.437) 0:03:08.642 ******** 2026-03-05 00:58:20.723346 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.723350 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.723354 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.723358 | orchestrator | 2026-03-05 00:58:20.723362 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-05 00:58:20.723366 | orchestrator | Thursday 05 March 2026 00:54:47 +0000 (0:00:02.228) 0:03:10.871 ******** 2026-03-05 00:58:20.723370 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.723377 | orchestrator | 
2026-03-05 00:58:20.723381 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-05 00:58:20.723386 | orchestrator | Thursday 05 March 2026 00:54:48 +0000 (0:00:01.313) 0:03:12.185 ******** 2026-03-05 00:58:20.723392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-05 00:58:20.723400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-05 00:58:20.723421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 
'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-05 00:58:20.723439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723461 | orchestrator | 2026-03-05 00:58:20.723465 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-05 00:58:20.723469 | orchestrator | Thursday 05 March 2026 00:54:52 +0000 (0:00:03.578) 0:03:15.763 ******** 2026-03-05 00:58:20.723475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-05 00:58:20.723480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723493 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.723497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-05 00:58:20.723506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723521 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.723526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-05 00:58:20.723530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.723550 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.723554 | orchestrator | 2026-03-05 00:58:20.723559 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-05 00:58:20.723563 | orchestrator | Thursday 05 March 2026 00:54:53 +0000 (0:00:00.697) 0:03:16.461 ******** 2026-03-05 00:58:20.723567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-05 00:58:20.723573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-05 00:58:20.723578 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.723582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-05 00:58:20.723586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-05 00:58:20.723590 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.723594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-05 00:58:20.723599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8786', 'listen_port': '8786'}})  2026-03-05 00:58:20.723603 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.723607 | orchestrator | 2026-03-05 00:58:20.723611 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-05 00:58:20.723615 | orchestrator | Thursday 05 March 2026 00:54:54 +0000 (0:00:01.344) 0:03:17.806 ******** 2026-03-05 00:58:20.723619 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.723623 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.723627 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.723631 | orchestrator | 2026-03-05 00:58:20.723636 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-05 00:58:20.723640 | orchestrator | Thursday 05 March 2026 00:54:55 +0000 (0:00:01.361) 0:03:19.167 ******** 2026-03-05 00:58:20.723647 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.723651 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.723655 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.723659 | orchestrator | 2026-03-05 00:58:20.723663 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-05 00:58:20.723667 | orchestrator | Thursday 05 March 2026 00:54:57 +0000 (0:00:02.137) 0:03:21.305 ******** 2026-03-05 00:58:20.723671 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.723676 | orchestrator | 2026-03-05 00:58:20.723680 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-05 00:58:20.723684 | orchestrator | Thursday 05 March 2026 00:54:59 +0000 (0:00:01.471) 0:03:22.776 ******** 2026-03-05 00:58:20.723688 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-05 00:58:20.723692 | orchestrator | 2026-03-05 00:58:20.723697 | orchestrator | TASK [haproxy-config : Copying over 
mariadb haproxy config] ******************** 2026-03-05 00:58:20.723701 | orchestrator | Thursday 05 March 2026 00:55:02 +0000 (0:00:03.390) 0:03:26.166 ******** 2026-03-05 00:58:20.723708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-03-05 00:58:20.723715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-05 00:58:20.723719 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.723724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:58:20.723731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-05 00:58:20.723736 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.723745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': 
'30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:58:20.723753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-05 00:58:20.723757 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.723761 | 
orchestrator | 2026-03-05 00:58:20.723765 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-05 00:58:20.723769 | orchestrator | Thursday 05 March 2026 00:55:05 +0000 (0:00:02.954) 0:03:29.121 ******** 2026-03-05 00:58:20.723776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:58:20.723782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-05 00:58:20.723788 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.723795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:58:20.723804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-05 00:58:20.723809 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.723819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 00:58:20.723825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-03-05 00:58:20.723832 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.723837 | orchestrator | 2026-03-05 00:58:20.723842 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-05 00:58:20.723847 | orchestrator | Thursday 05 March 2026 00:55:08 +0000 (0:00:02.384) 0:03:31.506 ******** 2026-03-05 00:58:20.723853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-05 00:58:20.723858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-05 00:58:20.723863 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.723868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-05 00:58:20.723875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-05 00:58:20.723880 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.723885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-05 00:58:20.723893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-05 00:58:20.723900 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.723905 | orchestrator | 2026-03-05 00:58:20.723909 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-05 00:58:20.723914 | orchestrator | Thursday 05 March 2026 00:55:11 +0000 (0:00:03.018) 0:03:34.524 ******** 2026-03-05 00:58:20.723919 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.723924 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.723929 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.723933 | orchestrator | 2026-03-05 00:58:20.723938 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-05 00:58:20.723943 | orchestrator | Thursday 05 March 2026 00:55:13 +0000 (0:00:01.897) 0:03:36.422 ******** 2026-03-05 00:58:20.723948 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.723953 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.723958 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.723963 | orchestrator | 2026-03-05 00:58:20.723968 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-05 00:58:20.723972 | orchestrator | Thursday 05 March 2026 00:55:14 +0000 (0:00:01.461) 0:03:37.884 ******** 2026-03-05 00:58:20.723977 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.723982 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.723987 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.723992 | orchestrator | 2026-03-05 00:58:20.723997 | orchestrator | TASK [include_role : memcached] ************************************************ 
2026-03-05 00:58:20.724002 | orchestrator | Thursday 05 March 2026 00:55:14 +0000 (0:00:00.338) 0:03:38.223 ******** 2026-03-05 00:58:20.724006 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.724011 | orchestrator | 2026-03-05 00:58:20.724016 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-05 00:58:20.724021 | orchestrator | Thursday 05 March 2026 00:55:16 +0000 (0:00:01.333) 0:03:39.557 ******** 2026-03-05 00:58:20.724026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-05 00:58:20.724035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-05 00:58:20.724045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-05 00:58:20.724050 | orchestrator | 2026-03-05 00:58:20.724054 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-05 00:58:20.724058 | orchestrator | Thursday 05 March 2026 00:55:17 +0000 (0:00:01.555) 0:03:41.112 ******** 2026-03-05 00:58:20.724062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}})  2026-03-05 00:58:20.724067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-05 00:58:20.724071 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.724075 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.724097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-05 00:58:20.724101 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.724105 | orchestrator | 2026-03-05 00:58:20.724110 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] 
********************* 2026-03-05 00:58:20.724114 | orchestrator | Thursday 05 March 2026 00:55:18 +0000 (0:00:00.440) 0:03:41.552 ******** 2026-03-05 00:58:20.724118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-05 00:58:20.724128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-05 00:58:20.724132 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.724137 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.724141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-05 00:58:20.724145 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.724149 | orchestrator | 2026-03-05 00:58:20.724153 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-05 00:58:20.724158 | orchestrator | Thursday 05 March 2026 00:55:19 +0000 (0:00:00.895) 0:03:42.447 ******** 2026-03-05 00:58:20.724162 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.724166 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.724170 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.724174 | orchestrator | 2026-03-05 00:58:20.724181 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-05 
00:58:20.724185 | orchestrator | Thursday 05 March 2026 00:55:19 +0000 (0:00:00.419) 0:03:42.867 ******** 2026-03-05 00:58:20.724189 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.724193 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.724197 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.724201 | orchestrator | 2026-03-05 00:58:20.724205 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-05 00:58:20.724210 | orchestrator | Thursday 05 March 2026 00:55:20 +0000 (0:00:01.293) 0:03:44.160 ******** 2026-03-05 00:58:20.724214 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.724218 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.724222 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.724226 | orchestrator | 2026-03-05 00:58:20.724230 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-05 00:58:20.724234 | orchestrator | Thursday 05 March 2026 00:55:21 +0000 (0:00:00.353) 0:03:44.514 ******** 2026-03-05 00:58:20.724238 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.724242 | orchestrator | 2026-03-05 00:58:20.724247 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-05 00:58:20.724251 | orchestrator | Thursday 05 March 2026 00:55:22 +0000 (0:00:01.414) 0:03:45.928 ******** 2026-03-05 00:58:20.724255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 00:58:20.724260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-05 00:58:20.724286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:58:20.724376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-05 00:58:20.724405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:58:20.724412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 00:58:20.724416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-05 00:58:20.724443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:58:20.724471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724482 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 00:58:20.724486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-05 
00:58:20.724512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:58:20.724525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-05 00:58:20.724535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:58:20.724563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-05 00:58:20.724593 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:58:20.724597 | orchestrator | 2026-03-05 00:58:20.724601 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-05 00:58:20.724606 | orchestrator | Thursday 05 March 2026 00:55:26 +0000 (0:00:04.489) 0:03:50.418 ******** 2026-03-05 00:58:20.724610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 00:58:20.724616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-05 00:58:20.724637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 00:58:20.724706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-05 00:58:20.724711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 00:58:20.724734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.724740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.724748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-05 00:58:20.724752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-05 00:58:20.724757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.724761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-05 00:58:20.724768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-05 00:58:20.724772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.724782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-05 00:58:20.724802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-05 00:58:20.724807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.724811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-05 00:58:20.724816 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.724823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-05 00:58:20.724830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.724838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-05 00:58:20.724843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-05 00:58:20.724847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.724851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-05 00:58:20.724858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-05 00:58:20.724868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.724873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-05 00:58:20.724893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.724898 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.724902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.724909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-05 00:58:20.724916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.724924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-05 00:58:20.724929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-05 00:58:20.724933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.724938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-05 00:58:20.724942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.724949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-05 00:58:20.724960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-05 00:58:20.724965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-05 00:58:20.724969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-05 00:58:20.724974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes':
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-05 00:58:20.724978 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.724982 | orchestrator |
2026-03-05 00:58:20.724987 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-03-05 00:58:20.724991 | orchestrator | Thursday 05 March 2026 00:55:28 +0000 (0:00:01.596) 0:03:52.014 ********
2026-03-05 00:58:20.724995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-05 00:58:20.725001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-05 00:58:20.725009 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.725016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-05 00:58:20.725021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-05 00:58:20.725026 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.725031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-05 00:58:20.725036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-05 00:58:20.725041 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.725045 | orchestrator |
2026-03-05 00:58:20.725050 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-03-05 00:58:20.725055 | orchestrator | Thursday 05 March 2026 00:55:30 +0000 (0:00:02.142) 0:03:54.157 ********
2026-03-05 00:58:20.725063 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:58:20.725068 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:58:20.725073 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:58:20.725077 | orchestrator |
2026-03-05 00:58:20.725097 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-03-05 00:58:20.725102 | orchestrator | Thursday 05 March 2026 00:55:32 +0000 (0:00:01.368) 0:03:55.526 ********
2026-03-05 00:58:20.725106 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:58:20.725111 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:58:20.725116 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:58:20.725121 | orchestrator |
2026-03-05 00:58:20.725126 | orchestrator | TASK [include_role : placement] ************************************************
2026-03-05 00:58:20.725130 | orchestrator | Thursday 05 March 2026 00:55:34 +0000 (0:00:01.224) 0:03:57.700 ********
2026-03-05 00:58:20.725135 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:58:20.725140 | orchestrator |
2026-03-05 00:58:20.725145 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-03-05 00:58:20.725150 | orchestrator | Thursday 05 March 2026 00:55:35 +0000 (0:00:01.224)
0:03:58.925 ********
2026-03-05 00:58:20.725154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.725159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.725233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.725240 | orchestrator |
2026-03-05 00:58:20.725244 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-03-05 00:58:20.725248 | orchestrator | Thursday 05 March 2026 00:55:39 +0000 (0:00:03.745) 0:04:02.670 ********
2026-03-05 00:58:20.725256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.725260 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.725265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.725279 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.725283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'},
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-05 00:58:20.725292 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.725296 | orchestrator |
2026-03-05 00:58:20.725300 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-03-05 00:58:20.725304 | orchestrator | Thursday 05 March 2026 00:55:39 +0000 (0:00:00.528) 0:04:03.199 ********
2026-03-05 00:58:20.725311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-05 00:58:20.725318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-05 00:58:20.725325 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.725333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-05 00:58:20.725337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-05 00:58:20.725342 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.725346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-05 00:58:20.725350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-05 00:58:20.725354 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.725358 | orchestrator |
2026-03-05 00:58:20.725365 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-03-05 00:58:20.725370 | orchestrator | Thursday 05 March 2026 00:55:40 +0000 (0:00:00.833) 0:04:04.032 ********
2026-03-05 00:58:20.725374 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:58:20.725378 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:58:20.725382 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:58:20.725386 | orchestrator |
2026-03-05 00:58:20.725390 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-03-05 00:58:20.725394 | orchestrator | Thursday 05 March 2026 00:55:42 +0000 (0:00:02.056) 0:04:06.089 ********
2026-03-05 00:58:20.725398 | orchestrator | changed: [testbed-node-0]
2026-03-05 00:58:20.725402 | orchestrator | changed: [testbed-node-1]
2026-03-05 00:58:20.725407 | orchestrator | changed: [testbed-node-2]
2026-03-05 00:58:20.725411 | orchestrator |
2026-03-05 00:58:20.725415 | orchestrator | TASK [include_role : nova] *****************************************************
2026-03-05 00:58:20.725419 | orchestrator | Thursday 05 March 2026 00:55:44 +0000 (0:00:01.512) 0:04:07.914 ********
2026-03-05 00:58:20.725423 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 00:58:20.725428 | orchestrator |
2026-03-05 00:58:20.725432 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-03-05 00:58:20.725436 | orchestrator | Thursday 05 March 2026 00:55:46 +0000 (0:00:01.512) 0:04:09.427 ********
2026-03-05 00:58:20.725441 |
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.725462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.725469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.725476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.725481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.725489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.725494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.725501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.725508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.725512 | orchestrator | 2026-03-05 00:58:20.725517 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-05 00:58:20.725521 | orchestrator | Thursday 05 March 2026 00:55:50 +0000 (0:00:04.321) 0:04:13.749 
******** 2026-03-05 00:58:20.725525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 00:58:20.725533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.725538 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.725545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 00:58:20.725551 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.725556 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.725564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.725569 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.725573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 00:58:20.725578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.725584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.725588 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.725592 | orchestrator | 
2026-03-05 00:58:20.725597 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-05 00:58:20.725601 | orchestrator | Thursday 05 March 2026 00:55:51 +0000 (0:00:01.256) 0:04:15.006 ******** 2026-03-05 00:58:20.725608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-05 00:58:20.725613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-05 00:58:20.725623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-05 00:58:20.725627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-05 00:58:20.725631 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.725636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-05 00:58:20.725640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-05 00:58:20.725644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-05 00:58:20.725648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-05 00:58:20.725653 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.725657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-05 00:58:20.725661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-05 00:58:20.725665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-05 00:58:20.725669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-05 00:58:20.725674 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.725678 | orchestrator | 2026-03-05 00:58:20.725682 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-05 00:58:20.725686 | orchestrator | Thursday 05 March 2026 00:55:52 +0000 (0:00:00.880) 0:04:15.886 ******** 2026-03-05 00:58:20.725690 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.725694 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.725699 | orchestrator | changed: [testbed-node-2] 2026-03-05 
00:58:20.725703 | orchestrator | 2026-03-05 00:58:20.725707 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-05 00:58:20.725711 | orchestrator | Thursday 05 March 2026 00:55:53 +0000 (0:00:01.474) 0:04:17.360 ******** 2026-03-05 00:58:20.725715 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.725719 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.725724 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.725728 | orchestrator | 2026-03-05 00:58:20.725734 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-05 00:58:20.725738 | orchestrator | Thursday 05 March 2026 00:55:56 +0000 (0:00:02.183) 0:04:19.544 ******** 2026-03-05 00:58:20.725742 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.725746 | orchestrator | 2026-03-05 00:58:20.725754 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-05 00:58:20.725758 | orchestrator | Thursday 05 March 2026 00:55:57 +0000 (0:00:01.516) 0:04:21.060 ******** 2026-03-05 00:58:20.725762 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-05 00:58:20.725767 | orchestrator | 2026-03-05 00:58:20.725771 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-05 00:58:20.725775 | orchestrator | Thursday 05 March 2026 00:55:58 +0000 (0:00:00.863) 0:04:21.924 ******** 2026-03-05 00:58:20.725782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-05 00:58:20.725787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-05 00:58:20.725791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-05 00:58:20.725796 | orchestrator | 2026-03-05 00:58:20.725800 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-05 00:58:20.725804 | orchestrator | Thursday 05 March 2026 00:56:03 +0000 (0:00:04.650) 0:04:26.574 ******** 2026-03-05 00:58:20.725808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:58:20.725813 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.725817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:58:20.725821 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.725826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:58:20.725833 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.725837 | orchestrator | 2026-03-05 00:58:20.725843 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-05 00:58:20.725847 | orchestrator | Thursday 05 March 2026 00:56:04 +0000 (0:00:01.081) 0:04:27.655 ******** 2026-03-05 00:58:20.725852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-05 
00:58:20.725856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-05 00:58:20.725861 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.725865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-05 00:58:20.725871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-05 00:58:20.725876 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.725880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-05 00:58:20.725884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-05 00:58:20.725889 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.725893 | orchestrator | 2026-03-05 00:58:20.725897 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-05 00:58:20.725901 | orchestrator | Thursday 05 March 2026 00:56:05 +0000 (0:00:01.624) 0:04:29.280 ******** 2026-03-05 00:58:20.725905 | orchestrator | changed: [testbed-node-0] 2026-03-05 
00:58:20.725909 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.725913 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.725917 | orchestrator | 2026-03-05 00:58:20.725921 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-05 00:58:20.725926 | orchestrator | Thursday 05 March 2026 00:56:08 +0000 (0:00:02.743) 0:04:32.024 ******** 2026-03-05 00:58:20.725930 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.725934 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.725938 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.725942 | orchestrator | 2026-03-05 00:58:20.725946 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-05 00:58:20.725950 | orchestrator | Thursday 05 March 2026 00:56:11 +0000 (0:00:03.154) 0:04:35.178 ******** 2026-03-05 00:58:20.725954 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-05 00:58:20.725958 | orchestrator | 2026-03-05 00:58:20.725963 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-05 00:58:20.725967 | orchestrator | Thursday 05 March 2026 00:56:13 +0000 (0:00:01.480) 0:04:36.658 ******** 2026-03-05 00:58:20.725971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:58:20.725979 | 
orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.725983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:58:20.725987 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.725994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:58:20.725999 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.726003 | orchestrator | 2026-03-05 00:58:20.726007 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-05 00:58:20.726039 | orchestrator | Thursday 05 March 2026 00:56:14 +0000 (0:00:01.261) 0:04:37.920 ******** 2026-03-05 00:58:20.726047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:58:20.726052 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.726056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:58:20.726061 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.726065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-05 00:58:20.726069 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.726073 | orchestrator | 2026-03-05 00:58:20.726077 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-05 00:58:20.726095 | orchestrator | Thursday 05 March 2026 00:56:15 +0000 (0:00:01.341) 0:04:39.261 ******** 2026-03-05 00:58:20.726102 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.726107 | orchestrator | skipping: [testbed-node-1] 
2026-03-05 00:58:20.726111 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.726115 | orchestrator | 2026-03-05 00:58:20.726119 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-05 00:58:20.726123 | orchestrator | Thursday 05 March 2026 00:56:17 +0000 (0:00:01.952) 0:04:41.214 ******** 2026-03-05 00:58:20.726127 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.726132 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.726136 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.726140 | orchestrator | 2026-03-05 00:58:20.726144 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-05 00:58:20.726148 | orchestrator | Thursday 05 March 2026 00:56:20 +0000 (0:00:02.382) 0:04:43.596 ******** 2026-03-05 00:58:20.726153 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.726157 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.726161 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.726165 | orchestrator | 2026-03-05 00:58:20.726169 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-05 00:58:20.726173 | orchestrator | Thursday 05 March 2026 00:56:23 +0000 (0:00:03.195) 0:04:46.791 ******** 2026-03-05 00:58:20.726178 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-05 00:58:20.726182 | orchestrator | 2026-03-05 00:58:20.726186 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-05 00:58:20.726190 | orchestrator | Thursday 05 March 2026 00:56:24 +0000 (0:00:00.902) 0:04:47.694 ******** 2026-03-05 00:58:20.726197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-05 00:58:20.726202 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.726206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-05 00:58:20.726211 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.726218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-05 00:58:20.726222 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.726226 | orchestrator | 2026-03-05 00:58:20.726231 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-05 00:58:20.726235 | orchestrator | Thursday 05 
March 2026 00:56:25 +0000 (0:00:01.370) 0:04:49.065 ******** 2026-03-05 00:58:20.726239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-05 00:58:20.726247 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.726251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-05 00:58:20.726255 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.726260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-05 00:58:20.726264 | orchestrator | skipping: [testbed-node-2] 2026-03-05 
00:58:20.726268 | orchestrator | 2026-03-05 00:58:20.726272 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-05 00:58:20.726276 | orchestrator | Thursday 05 March 2026 00:56:27 +0000 (0:00:01.395) 0:04:50.461 ******** 2026-03-05 00:58:20.726281 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.726285 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.726289 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.726293 | orchestrator | 2026-03-05 00:58:20.726297 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-05 00:58:20.726301 | orchestrator | Thursday 05 March 2026 00:56:28 +0000 (0:00:01.768) 0:04:52.229 ******** 2026-03-05 00:58:20.726305 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.726309 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.726314 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.726318 | orchestrator | 2026-03-05 00:58:20.726322 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-05 00:58:20.726326 | orchestrator | Thursday 05 March 2026 00:56:31 +0000 (0:00:02.798) 0:04:55.027 ******** 2026-03-05 00:58:20.726330 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.726334 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.726338 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.726342 | orchestrator | 2026-03-05 00:58:20.726346 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-05 00:58:20.726351 | orchestrator | Thursday 05 March 2026 00:56:35 +0000 (0:00:03.566) 0:04:58.594 ******** 2026-03-05 00:58:20.726362 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.726367 | orchestrator | 2026-03-05 00:58:20.726371 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] 
******************** 2026-03-05 00:58:20.726375 | orchestrator | Thursday 05 March 2026 00:56:36 +0000 (0:00:01.623) 0:05:00.217 ******** 2026-03-05 00:58:20.726382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.726389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 00:58:20.726394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.726398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.726403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.726410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.726417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 00:58:20.726424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.726429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.726433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.726438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.726444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 00:58:20.726448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.726460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.726465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.726469 | orchestrator | 2026-03-05 00:58:20.726473 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-05 00:58:20.726477 | orchestrator | Thursday 05 March 2026 00:56:40 +0000 (0:00:03.762) 0:05:03.980 ******** 2026-03-05 00:58:20.726482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 00:58:20.726486 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 00:58:20.726493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.726502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.726508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.726513 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.726517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 00:58:20.726522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 00:58:20.726526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.726532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.726540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.726544 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.726551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 00:58:20.726555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 00:58:20.726560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.726564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 00:58:20.726571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 00:58:20.726607 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.726612 | orchestrator | 2026-03-05 00:58:20.726616 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-05 00:58:20.726620 | orchestrator | Thursday 05 March 2026 00:56:41 +0000 (0:00:00.767) 
0:05:04.747 ******** 2026-03-05 00:58:20.726624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-05 00:58:20.726629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-05 00:58:20.726633 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.726637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-05 00:58:20.726644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-05 00:58:20.726649 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.726653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-05 00:58:20.726657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-05 00:58:20.726661 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.726666 | orchestrator | 2026-03-05 00:58:20.726670 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-05 00:58:20.726674 | orchestrator | Thursday 05 March 2026 
00:56:42 +0000 (0:00:01.548) 0:05:06.295 ******** 2026-03-05 00:58:20.726678 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.726682 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.726686 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.726691 | orchestrator | 2026-03-05 00:58:20.726695 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-05 00:58:20.726699 | orchestrator | Thursday 05 March 2026 00:56:44 +0000 (0:00:01.535) 0:05:07.831 ******** 2026-03-05 00:58:20.726703 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.726707 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.726711 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.726715 | orchestrator | 2026-03-05 00:58:20.726719 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-05 00:58:20.726724 | orchestrator | Thursday 05 March 2026 00:56:46 +0000 (0:00:02.147) 0:05:09.979 ******** 2026-03-05 00:58:20.726728 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.726732 | orchestrator | 2026-03-05 00:58:20.726736 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-05 00:58:20.726740 | orchestrator | Thursday 05 March 2026 00:56:47 +0000 (0:00:01.336) 0:05:11.316 ******** 2026-03-05 00:58:20.726745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:58:20.726755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:58:20.726762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 00:58:20.726768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:58:20.726773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:58:20.726784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 00:58:20.726789 | orchestrator | 2026-03-05 00:58:20.726793 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-05 00:58:20.726797 | orchestrator | Thursday 05 March 2026 00:56:53 +0000 (0:00:05.488) 0:05:16.804 ******** 2026-03-05 00:58:20.726804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 00:58:20.726809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 00:58:20.726814 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.726818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 00:58:20.726828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 00:58:20.726833 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.726839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 00:58:20.726844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 00:58:20.726849 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.726853 | orchestrator | 2026-03-05 
00:58:20.726857 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-05 00:58:20.726865 | orchestrator | Thursday 05 March 2026 00:56:54 +0000 (0:00:00.730) 0:05:17.534 ******** 2026-03-05 00:58:20.726869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-05 00:58:20.726873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-05 00:58:20.726878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-05 00:58:20.726883 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.726887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-05 00:58:20.726891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-05 00:58:20.726896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-05 00:58:20.726900 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.726904 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-05 00:58:20.726911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-05 00:58:20.726915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-05 00:58:20.726919 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.726924 | orchestrator | 2026-03-05 00:58:20.726928 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-05 00:58:20.726932 | orchestrator | Thursday 05 March 2026 00:56:55 +0000 (0:00:01.011) 0:05:18.546 ******** 2026-03-05 00:58:20.726936 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.726940 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.726944 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.726949 | orchestrator | 2026-03-05 00:58:20.726953 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-05 00:58:20.726957 | orchestrator | Thursday 05 March 2026 00:56:56 +0000 (0:00:01.163) 0:05:19.710 ******** 2026-03-05 00:58:20.726961 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.726968 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.726972 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.726976 | orchestrator | 2026-03-05 00:58:20.726981 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2026-03-05 00:58:20.726985 | orchestrator | Thursday 05 March 2026 00:56:57 +0000 (0:00:01.468) 0:05:21.179 ******** 2026-03-05 00:58:20.726989 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.726993 | orchestrator | 2026-03-05 00:58:20.726997 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-05 00:58:20.727005 | orchestrator | Thursday 05 March 2026 00:56:59 +0000 (0:00:01.475) 0:05:22.655 ******** 2026-03-05 00:58:20.727010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 00:58:20.727018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 00:58:20.727023 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 00:58:20.727039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 00:58:20.727044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 00:58:20.727051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 00:58:20.727055 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 00:58:20.727071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 00:58:20.727139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 00:58:20.727143 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 00:58:20.727148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-05 00:58:20.727155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 00:58:20.727175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 00:58:20.727180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-05 00:58:20.727184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 00:58:20.727210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 00:58:20.727215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-05 00:58:20.727219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 00:58:20.727232 | orchestrator | 2026-03-05 00:58:20.727239 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-05 00:58:20.727243 | orchestrator | Thursday 05 March 2026 00:57:04 +0000 (0:00:04.773) 0:05:27.428 ******** 2026-03-05 00:58:20.727248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-05 00:58:20.727258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 00:58:20.727263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 00:58:20.727276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-05 00:58:20.727283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 
'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-05 00:58:20.727293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-05 00:58:20.727302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-05 00:58:20.727306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 00:58:20.727311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 00:58:20.727354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727365 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.727369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 00:58:20.727381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-05 00:58:20.727386 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-05 00:58:20.727390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727406 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 00:58:20.727411 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.727430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-05 00:58:20.727435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 00:58:20.727440 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 00:58:20.727455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-05 00:58:20.727466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-05 00:58:20.727471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 00:58:20.727480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 00:58:20.727484 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.727488 | orchestrator | 2026-03-05 00:58:20.727493 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-05 00:58:20.727497 | orchestrator | Thursday 05 March 2026 00:57:05 +0000 (0:00:01.261) 0:05:28.690 ******** 2026-03-05 00:58:20.727501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-05 00:58:20.727506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-05 00:58:20.727514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-05 00:58:20.727519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-05 00:58:20.727523 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.727529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-05 00:58:20.727534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-05 00:58:20.727538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-05 00:58:20.727545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-05 00:58:20.727549 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.727553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-05 00:58:20.727557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-05 00:58:20.727561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-05 00:58:20.727565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-05 00:58:20.727569 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.727572 | orchestrator | 2026-03-05 00:58:20.727576 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-05 00:58:20.727580 | orchestrator | Thursday 05 March 2026 00:57:06 +0000 (0:00:01.049) 0:05:29.739 ******** 2026-03-05 00:58:20.727584 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.727588 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.727592 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.727595 | orchestrator | 2026-03-05 00:58:20.727599 | orchestrator | TASK [proxysql-config : 
Copying over prometheus ProxySQL rules config] ********* 2026-03-05 00:58:20.727603 | orchestrator | Thursday 05 March 2026 00:57:06 +0000 (0:00:00.475) 0:05:30.215 ******** 2026-03-05 00:58:20.727607 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.727613 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.727617 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.727621 | orchestrator | 2026-03-05 00:58:20.727625 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-05 00:58:20.727629 | orchestrator | Thursday 05 March 2026 00:57:08 +0000 (0:00:01.524) 0:05:31.739 ******** 2026-03-05 00:58:20.727633 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.727637 | orchestrator | 2026-03-05 00:58:20.727640 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-05 00:58:20.727644 | orchestrator | Thursday 05 March 2026 00:57:10 +0000 (0:00:01.774) 0:05:33.514 ******** 2026-03-05 00:58:20.727650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-05 00:58:20.727657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-05 00:58:20.727661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-05 00:58:20.727666 | orchestrator | 2026-03-05 00:58:20.727670 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-05 00:58:20.727673 | orchestrator | Thursday 05 March 2026 00:57:12 +0000 (0:00:02.550) 0:05:36.065 ******** 2026-03-05 00:58:20.727681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-05 00:58:20.727685 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.727691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-05 00:58:20.727695 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.727701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-05 00:58:20.727706 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.727709 | orchestrator | 2026-03-05 00:58:20.727713 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-05 00:58:20.727717 | orchestrator | Thursday 05 March 2026 
00:57:13 +0000 (0:00:00.436) 0:05:36.501 ******** 2026-03-05 00:58:20.727721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-05 00:58:20.727725 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.727729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-05 00:58:20.727733 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.727737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-05 00:58:20.727756 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.727760 | orchestrator | 2026-03-05 00:58:20.727764 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-05 00:58:20.727768 | orchestrator | Thursday 05 March 2026 00:57:14 +0000 (0:00:01.057) 0:05:37.558 ******** 2026-03-05 00:58:20.727772 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.727775 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.727779 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.727783 | orchestrator | 2026-03-05 00:58:20.727787 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-05 00:58:20.727791 | orchestrator | Thursday 05 March 2026 00:57:14 +0000 (0:00:00.456) 0:05:38.015 ******** 2026-03-05 00:58:20.727794 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.727798 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.727802 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.727806 | orchestrator | 2026-03-05 00:58:20.727810 | orchestrator | TASK [include_role : skyline] 
************************************************** 2026-03-05 00:58:20.727813 | orchestrator | Thursday 05 March 2026 00:57:16 +0000 (0:00:01.445) 0:05:39.460 ******** 2026-03-05 00:58:20.727817 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 00:58:20.727821 | orchestrator | 2026-03-05 00:58:20.727825 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-05 00:58:20.727829 | orchestrator | Thursday 05 March 2026 00:57:17 +0000 (0:00:01.840) 0:05:41.301 ******** 2026-03-05 00:58:20.727833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.727839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.727846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.727853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.727858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.727864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-05 00:58:20.727868 | orchestrator | 2026-03-05 00:58:20.727872 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-05 00:58:20.727876 | orchestrator | Thursday 05 March 2026 00:57:24 +0000 (0:00:06.444) 0:05:47.745 ******** 2026-03-05 00:58:20.727882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-05 00:58:20.727890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-05 00:58:20.727894 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.727898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-05 00:58:20.727903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-05 00:58:20.727908 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.727916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-05 00:58:20.727923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-05 00:58:20.727927 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.727931 | orchestrator | 2026-03-05 00:58:20.727934 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-05 00:58:20.727938 | orchestrator | Thursday 05 March 2026 00:57:24 +0000 (0:00:00.647) 0:05:48.393 ******** 2026-03-05 00:58:20.727942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-05 00:58:20.727947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-05 00:58:20.727951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-05 00:58:20.727955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-05 00:58:20.727959 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.727963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-05 00:58:20.727967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-05 00:58:20.727971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-05 00:58:20.727976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-05 00:58:20.727980 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.727984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-05 00:58:20.727992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-05 00:58:20.727996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-05 00:58:20.728002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-05 00:58:20.728006 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.728010 | orchestrator | 2026-03-05 00:58:20.728014 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-05 00:58:20.728018 | orchestrator | Thursday 05 March 2026 00:57:26 +0000 (0:00:01.806) 0:05:50.199 ******** 2026-03-05 00:58:20.728022 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.728026 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.728030 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.728033 | orchestrator | 2026-03-05 00:58:20.728037 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-05 00:58:20.728041 | orchestrator | Thursday 05 March 2026 00:57:28 +0000 (0:00:01.463) 0:05:51.662 ******** 2026-03-05 00:58:20.728045 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.728049 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.728053 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.728056 | orchestrator | 2026-03-05 00:58:20.728060 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-05 00:58:20.728064 | orchestrator | Thursday 05 March 2026 00:57:30 +0000 (0:00:02.460) 0:05:54.122 ******** 2026-03-05 00:58:20.728068 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.728072 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.728076 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.728091 | 
orchestrator | 2026-03-05 00:58:20.728095 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-05 00:58:20.728099 | orchestrator | Thursday 05 March 2026 00:57:31 +0000 (0:00:00.362) 0:05:54.485 ******** 2026-03-05 00:58:20.728103 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.728107 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.728110 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.728114 | orchestrator | 2026-03-05 00:58:20.728118 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-05 00:58:20.728122 | orchestrator | Thursday 05 March 2026 00:57:31 +0000 (0:00:00.341) 0:05:54.827 ******** 2026-03-05 00:58:20.728126 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.728129 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.728133 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.728137 | orchestrator | 2026-03-05 00:58:20.728141 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-05 00:58:20.728145 | orchestrator | Thursday 05 March 2026 00:57:32 +0000 (0:00:00.765) 0:05:55.592 ******** 2026-03-05 00:58:20.728148 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.728152 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.728156 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.728160 | orchestrator | 2026-03-05 00:58:20.728164 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-05 00:58:20.728167 | orchestrator | Thursday 05 March 2026 00:57:32 +0000 (0:00:00.351) 0:05:55.944 ******** 2026-03-05 00:58:20.728171 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.728175 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.728179 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.728186 | 
orchestrator | 2026-03-05 00:58:20.728190 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-05 00:58:20.728194 | orchestrator | Thursday 05 March 2026 00:57:32 +0000 (0:00:00.351) 0:05:56.296 ******** 2026-03-05 00:58:20.728197 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.728201 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.728205 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.728209 | orchestrator | 2026-03-05 00:58:20.728213 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-05 00:58:20.728217 | orchestrator | Thursday 05 March 2026 00:57:33 +0000 (0:00:01.003) 0:05:57.299 ******** 2026-03-05 00:58:20.728220 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.728224 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.728228 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.728232 | orchestrator | 2026-03-05 00:58:20.728236 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-05 00:58:20.728240 | orchestrator | Thursday 05 March 2026 00:57:34 +0000 (0:00:00.746) 0:05:58.046 ******** 2026-03-05 00:58:20.728244 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.728247 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.728251 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.728255 | orchestrator | 2026-03-05 00:58:20.728259 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-05 00:58:20.728263 | orchestrator | Thursday 05 March 2026 00:57:35 +0000 (0:00:00.388) 0:05:58.434 ******** 2026-03-05 00:58:20.728267 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.728271 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.728274 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.728278 | orchestrator | 2026-03-05 00:58:20.728284 | orchestrator 
| RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-05 00:58:20.728288 | orchestrator | Thursday 05 March 2026 00:57:36 +0000 (0:00:00.989) 0:05:59.424 ******** 2026-03-05 00:58:20.728292 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.728296 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.728300 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.728303 | orchestrator | 2026-03-05 00:58:20.728307 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-05 00:58:20.728311 | orchestrator | Thursday 05 March 2026 00:57:37 +0000 (0:00:01.262) 0:06:00.686 ******** 2026-03-05 00:58:20.728315 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.728319 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.728323 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.728326 | orchestrator | 2026-03-05 00:58:20.728330 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-05 00:58:20.728334 | orchestrator | Thursday 05 March 2026 00:57:38 +0000 (0:00:01.089) 0:06:01.776 ******** 2026-03-05 00:58:20.728338 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.728342 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.728346 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.728349 | orchestrator | 2026-03-05 00:58:20.728353 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-05 00:58:20.728357 | orchestrator | Thursday 05 March 2026 00:57:48 +0000 (0:00:09.876) 0:06:11.652 ******** 2026-03-05 00:58:20.728364 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.728368 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.728372 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.728375 | orchestrator | 2026-03-05 00:58:20.728379 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql 
container] *************** 2026-03-05 00:58:20.728383 | orchestrator | Thursday 05 March 2026 00:57:49 +0000 (0:00:00.848) 0:06:12.501 ******** 2026-03-05 00:58:20.728387 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.728391 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.728394 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.728398 | orchestrator | 2026-03-05 00:58:20.728402 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-05 00:58:20.728408 | orchestrator | Thursday 05 March 2026 00:58:04 +0000 (0:00:15.096) 0:06:27.598 ******** 2026-03-05 00:58:20.728412 | orchestrator | ok: [testbed-node-0] 2026-03-05 00:58:20.728416 | orchestrator | ok: [testbed-node-1] 2026-03-05 00:58:20.728420 | orchestrator | ok: [testbed-node-2] 2026-03-05 00:58:20.728424 | orchestrator | 2026-03-05 00:58:20.728427 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-05 00:58:20.728431 | orchestrator | Thursday 05 March 2026 00:58:05 +0000 (0:00:01.168) 0:06:28.766 ******** 2026-03-05 00:58:20.728435 | orchestrator | changed: [testbed-node-0] 2026-03-05 00:58:20.728439 | orchestrator | changed: [testbed-node-2] 2026-03-05 00:58:20.728443 | orchestrator | changed: [testbed-node-1] 2026-03-05 00:58:20.728446 | orchestrator | 2026-03-05 00:58:20.728450 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-05 00:58:20.728454 | orchestrator | Thursday 05 March 2026 00:58:09 +0000 (0:00:04.413) 0:06:33.179 ******** 2026-03-05 00:58:20.728458 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.728462 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.728466 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.728469 | orchestrator | 2026-03-05 00:58:20.728473 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 
2026-03-05 00:58:20.728477 | orchestrator | Thursday 05 March 2026 00:58:10 +0000 (0:00:00.366) 0:06:33.546 ******** 2026-03-05 00:58:20.728481 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.728485 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.728489 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.728503 | orchestrator | 2026-03-05 00:58:20.728507 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-05 00:58:20.728510 | orchestrator | Thursday 05 March 2026 00:58:10 +0000 (0:00:00.449) 0:06:33.995 ******** 2026-03-05 00:58:20.728514 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.728518 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.728522 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.728526 | orchestrator | 2026-03-05 00:58:20.728530 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-05 00:58:20.728533 | orchestrator | Thursday 05 March 2026 00:58:11 +0000 (0:00:00.715) 0:06:34.711 ******** 2026-03-05 00:58:20.728537 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.728541 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.728545 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.728548 | orchestrator | 2026-03-05 00:58:20.728552 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-05 00:58:20.728556 | orchestrator | Thursday 05 March 2026 00:58:11 +0000 (0:00:00.368) 0:06:35.079 ******** 2026-03-05 00:58:20.728560 | orchestrator | skipping: [testbed-node-0] 2026-03-05 00:58:20.728563 | orchestrator | skipping: [testbed-node-1] 2026-03-05 00:58:20.728567 | orchestrator | skipping: [testbed-node-2] 2026-03-05 00:58:20.728571 | orchestrator | 2026-03-05 00:58:20.728575 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 
2026-03-05 00:58:20.728578 | orchestrator | Thursday 05 March 2026 00:58:12 +0000 (0:00:00.363) 0:06:35.443 ********
2026-03-05 00:58:20.728582 | orchestrator | skipping: [testbed-node-0]
2026-03-05 00:58:20.728586 | orchestrator | skipping: [testbed-node-1]
2026-03-05 00:58:20.728590 | orchestrator | skipping: [testbed-node-2]
2026-03-05 00:58:20.728593 | orchestrator |
2026-03-05 00:58:20.728597 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-05 00:58:20.728601 | orchestrator | Thursday 05 March 2026 00:58:12 +0000 (0:00:00.357) 0:06:35.801 ********
2026-03-05 00:58:20.728605 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:20.728608 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:20.728612 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:20.728616 | orchestrator |
2026-03-05 00:58:20.728620 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-05 00:58:20.728623 | orchestrator | Thursday 05 March 2026 00:58:17 +0000 (0:00:05.222) 0:06:41.023 ********
2026-03-05 00:58:20.728631 | orchestrator | ok: [testbed-node-0]
2026-03-05 00:58:20.728635 | orchestrator | ok: [testbed-node-1]
2026-03-05 00:58:20.728638 | orchestrator | ok: [testbed-node-2]
2026-03-05 00:58:20.728642 | orchestrator |
2026-03-05 00:58:20.728646 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 00:58:20.728652 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-05 00:58:20.728656 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-05 00:58:20.728660 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-05 00:58:20.728664 | orchestrator |
2026-03-05 00:58:20.728668 | orchestrator |
2026-03-05 00:58:20.728672 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 00:58:20.728676 | orchestrator | Thursday 05 March 2026 00:58:18 +0000 (0:00:00.862) 0:06:41.885 ********
2026-03-05 00:58:20.728679 | orchestrator | ===============================================================================
2026-03-05 00:58:20.728683 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.10s
2026-03-05 00:58:20.728687 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.88s
2026-03-05 00:58:20.728693 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.44s
2026-03-05 00:58:20.728697 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.77s
2026-03-05 00:58:20.728701 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.49s
2026-03-05 00:58:20.728704 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.39s
2026-03-05 00:58:20.728708 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 5.25s
2026-03-05 00:58:20.728712 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.22s
2026-03-05 00:58:20.728716 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.77s
2026-03-05 00:58:20.728719 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.65s
2026-03-05 00:58:20.728723 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.50s
2026-03-05 00:58:20.728727 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.49s
2026-03-05 00:58:20.728731 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.42s
2026-03-05 00:58:20.728734 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.41s
2026-03-05 00:58:20.728738 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.40s
2026-03-05 00:58:20.728742 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.34s
2026-03-05 00:58:20.728746 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.32s
2026-03-05 00:58:20.728749 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.26s
2026-03-05 00:58:20.728753 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.99s
2026-03-05 00:58:20.728757 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.96s
2026-03-05 00:58:20.728761 | orchestrator | 2026-03-05 00:58:20 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:58:23.759502 | orchestrator | 2026-03-05 00:58:23 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state STARTED
2026-03-05 00:58:23.761928 | orchestrator | 2026-03-05 00:58:23 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED
2026-03-05 00:58:23.761987 | orchestrator | 2026-03-05 00:58:23 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:58:23.761995 | orchestrator | 2026-03-05 00:58:23 | INFO  | Wait 1 second(s) until the next check
2026-03-05 00:58:26.804264 | orchestrator | 2026-03-05 00:58:26 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state STARTED
2026-03-05 00:58:26.805731 | orchestrator | 2026-03-05 00:58:26 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED
2026-03-05 00:58:26.809424 | orchestrator | 2026-03-05 00:58:26 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 00:58:26.809494 | orchestrator | 2026-03-05 00:58:26 | INFO  | Wait 1 second(s) until the next check
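The repeating "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above are a poll-until-terminal loop: the deployer repeatedly asks a task API for each task's state, sleeps a fixed interval, and stops once every task reaches a terminal state. A minimal sketch of that pattern in Python, assuming a hypothetical `get_task_state` client callable (only the state names and the 1-second interval come from the log):

```python
import time

# States after which a task is no longer polled (assumed set).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1, timeout=3600):
    """Poll each task until all reach a terminal state, logging progress."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"INFO  | Wait {interval} second(s) until the next check")
            time.sleep(interval)

# Example with a fake client that reports SUCCESS on the third poll:
states = {"t1": ["STARTED", "STARTED", "SUCCESS"]}
wait_for_tasks(["t1"], lambda t: states[t].pop(0), interval=0)
```

A fixed 1-second sleep keeps the log chatty but makes progress easy to follow; a production watcher might back off exponentially instead.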
2026-03-05 01:00:44.138383 | orchestrator | 2026-03-05 01:00:44 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state STARTED
2026-03-05 01:00:44.139528 | orchestrator | 2026-03-05 01:00:44 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED
2026-03-05 01:00:44.141732 | orchestrator | 2026-03-05 01:00:44 | INFO  | Task 5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state STARTED
2026-03-05 01:00:44.141957 | orchestrator | 2026-03-05 01:00:44 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:00:47.196328 | orchestrator | 2026-03-05 01:00:47 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED
2026-03-05 01:00:47.198243 | orchestrator | 2026-03-05 01:00:47 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state STARTED
2026-03-05 01:00:47.199974 | orchestrator | 2026-03-05 01:00:47 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED
2026-03-05 01:00:47.206151 | orchestrator | 2026-03-05 01:00:47 | INFO  | Task
5ff43d0b-255e-4ea5-914c-69ec014f88b5 is in state SUCCESS
2026-03-05 01:00:47.208242 | orchestrator |
2026-03-05 01:00:47.208293 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-05 01:00:47.208302 | orchestrator | 2.16.14
2026-03-05 01:00:47.208310 | orchestrator |
2026-03-05 01:00:47.208316 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-03-05 01:00:47.208323 | orchestrator |
2026-03-05 01:00:47.208328 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-05 01:00:47.208332 | orchestrator | Thursday 05 March 2026 00:49:01 +0000 (0:00:00.878) 0:00:00.878 ********
2026-03-05 01:00:47.208338 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.208343 | orchestrator |
2026-03-05 01:00:47.208347 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-05 01:00:47.208351 | orchestrator | Thursday 05 March 2026 00:49:02 +0000 (0:00:01.187) 0:00:02.066 ********
2026-03-05 01:00:47.208355 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.208360 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.208364 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.208368 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.208372 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.208375 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.208379 | orchestrator |
2026-03-05 01:00:47.208383 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-05 01:00:47.208387 | orchestrator | Thursday 05 March 2026 00:49:04 +0000 (0:00:00.861) 0:00:03.652 ********
2026-03-05 01:00:47.208391 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.208414 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.208418 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.208422 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.208443 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.208448 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.208451 | orchestrator |
2026-03-05 01:00:47.208455 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-05 01:00:47.208459 | orchestrator | Thursday 05 March 2026 00:49:05 +0000 (0:00:00.861) 0:00:04.513 ********
2026-03-05 01:00:47.208463 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.208467 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.208471 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.208474 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.208478 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.208482 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.208486 | orchestrator |
2026-03-05 01:00:47.208490 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-05 01:00:47.208494 | orchestrator | Thursday 05 March 2026 00:49:06 +0000 (0:00:01.076) 0:00:05.590 ********
2026-03-05 01:00:47.208498 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.208501 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.208505 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.208509 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.208513 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.208517 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.208520 | orchestrator |
2026-03-05 01:00:47.208524 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-05 01:00:47.208528 | orchestrator | Thursday 05 March 2026 00:49:06 +0000 (0:00:00.545) 0:00:06.307 ********
2026-03-05 01:00:47.208532 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.208536 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.208540 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.208544 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.208548 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.208552 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.208556 | orchestrator |
2026-03-05 01:00:47.208560 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-05 01:00:47.208563 | orchestrator | Thursday 05 March 2026 00:49:07 +0000 (0:00:00.545) 0:00:06.852 ********
2026-03-05 01:00:47.208567 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.208571 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.208575 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.208579 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.208583 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.208587 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.208590 | orchestrator |
2026-03-05 01:00:47.208594 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-05 01:00:47.208598 | orchestrator | Thursday 05 March 2026 00:49:08 +0000 (0:00:00.859) 0:00:07.712 ********
2026-03-05 01:00:47.208602 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.208607 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.208611 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.208615 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.208619 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.208622 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.208626 | orchestrator |
2026-03-05 01:00:47.208630 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-05 01:00:47.208634 | orchestrator | Thursday 05 March 2026 00:49:09 +0000 (0:00:00.864) 0:00:08.576 ********
2026-03-05 01:00:47.208638 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.208642 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.208646 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.208649 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.208653 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.208657 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.208661 | orchestrator |
2026-03-05 01:00:47.208668 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-05 01:00:47.208672 | orchestrator | Thursday 05 March 2026 00:49:10 +0000 (0:00:01.454) 0:00:10.031 ********
2026-03-05 01:00:47.208676 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-05 01:00:47.208680 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-05 01:00:47.208684 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-05 01:00:47.208687 | orchestrator |
2026-03-05 01:00:47.208691 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-05 01:00:47.208727 | orchestrator | Thursday 05 March 2026 00:49:12 +0000 (0:00:01.557) 0:00:11.589 ********
2026-03-05 01:00:47.208733 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.208739 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.208745 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.208763 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.208770 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.208776 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.208782 | orchestrator |
2026-03-05 01:00:47.208788 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-05 01:00:47.208794 | orchestrator | Thursday 05 March 2026 00:49:14 +0000 (0:00:03.423) 0:00:13.391 ********
2026-03-05 01:00:47.208800 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-05 01:00:47.208825 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-05 01:00:47.208832 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-05 01:00:47.208839 | orchestrator |
2026-03-05 01:00:47.208845 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-05 01:00:47.208939 | orchestrator | Thursday 05 March 2026 00:49:17 +0000 (0:00:03.423) 0:00:16.814 ********
2026-03-05 01:00:47.208947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-05 01:00:47.208954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-05 01:00:47.208959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-05 01:00:47.208966 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.208972 | orchestrator |
2026-03-05 01:00:47.208979 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-05 01:00:47.208984 | orchestrator | Thursday 05 March 2026 00:49:18 +0000 (0:00:01.481) 0:00:18.296 ********
2026-03-05 01:00:47.208999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-05 01:00:47.209009 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-05 01:00:47.209015 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason':
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.209021 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209028 | orchestrator | 2026-03-05 01:00:47.209035 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-05 01:00:47.209041 | orchestrator | Thursday 05 March 2026 00:49:20 +0000 (0:00:01.409) 0:00:19.706 ******** 2026-03-05 01:00:47.209049 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.209065 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.209073 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.209079 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209085 | orchestrator | 2026-03-05 
01:00:47.209092 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-05 01:00:47.209098 | orchestrator | Thursday 05 March 2026 00:49:21 +0000 (0:00:00.654) 0:00:20.361 ******** 2026-03-05 01:00:47.209116 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-05 00:49:14.820599', 'end': '2026-03-05 00:49:14.938117', 'delta': '0:00:00.117518', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.209125 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-05 00:49:16.316166', 'end': '2026-03-05 00:49:16.450862', 'delta': '0:00:00.134696', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.209155 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': 
'2026-03-05 00:49:17.043762', 'end': '2026-03-05 00:49:17.128794', 'delta': '0:00:00.085032', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.209163 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209169 | orchestrator | 2026-03-05 01:00:47.209174 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-05 01:00:47.209180 | orchestrator | Thursday 05 March 2026 00:49:21 +0000 (0:00:00.329) 0:00:20.691 ******** 2026-03-05 01:00:47.209185 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.209191 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.209202 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.209208 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.209214 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.209220 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.209225 | orchestrator | 2026-03-05 01:00:47.209232 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-05 01:00:47.209238 | orchestrator | Thursday 05 March 2026 00:49:24 +0000 (0:00:03.383) 0:00:24.074 ******** 2026-03-05 01:00:47.209244 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-05 01:00:47.209249 | orchestrator | 2026-03-05 01:00:47.209253 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-05 01:00:47.209256 | orchestrator | Thursday 05 March 2026 00:49:25 +0000 (0:00:01.131) 0:00:25.205 ******** 2026-03-05 01:00:47.209260 | 
orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209271 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.209275 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.209279 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.209289 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.209292 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.209296 | orchestrator | 2026-03-05 01:00:47.209300 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-05 01:00:47.209304 | orchestrator | Thursday 05 March 2026 00:49:27 +0000 (0:00:01.598) 0:00:26.804 ******** 2026-03-05 01:00:47.209308 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.209311 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209315 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.209319 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.209323 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.209327 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.209330 | orchestrator | 2026-03-05 01:00:47.209334 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-05 01:00:47.209338 | orchestrator | Thursday 05 March 2026 00:49:30 +0000 (0:00:02.723) 0:00:29.530 ******** 2026-03-05 01:00:47.209342 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209346 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.209349 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.209353 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.209357 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.209361 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.209364 | orchestrator | 2026-03-05 01:00:47.209368 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 
2026-03-05 01:00:47.209372 | orchestrator | Thursday 05 March 2026 00:49:32 +0000 (0:00:01.865) 0:00:31.396 ******** 2026-03-05 01:00:47.209376 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209380 | orchestrator | 2026-03-05 01:00:47.209384 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-05 01:00:47.209387 | orchestrator | Thursday 05 March 2026 00:49:32 +0000 (0:00:00.285) 0:00:31.682 ******** 2026-03-05 01:00:47.209391 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209395 | orchestrator | 2026-03-05 01:00:47.209399 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-05 01:00:47.209403 | orchestrator | Thursday 05 March 2026 00:49:32 +0000 (0:00:00.323) 0:00:32.005 ******** 2026-03-05 01:00:47.209406 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209410 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.209414 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.209423 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.209427 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.209430 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.209434 | orchestrator | 2026-03-05 01:00:47.209438 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-05 01:00:47.209442 | orchestrator | Thursday 05 March 2026 00:49:33 +0000 (0:00:00.941) 0:00:32.946 ******** 2026-03-05 01:00:47.209506 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209510 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.209514 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.209517 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.209521 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.209525 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.209529 | 
orchestrator | 2026-03-05 01:00:47.209533 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-05 01:00:47.209537 | orchestrator | Thursday 05 March 2026 00:49:34 +0000 (0:00:00.903) 0:00:33.850 ******** 2026-03-05 01:00:47.209541 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209545 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.209548 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.209569 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.209574 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.209578 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.209581 | orchestrator | 2026-03-05 01:00:47.209585 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-05 01:00:47.209589 | orchestrator | Thursday 05 March 2026 00:49:35 +0000 (0:00:00.794) 0:00:34.644 ******** 2026-03-05 01:00:47.209593 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209597 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.209601 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.209608 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.209612 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.209616 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.209643 | orchestrator | 2026-03-05 01:00:47.209648 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-05 01:00:47.209652 | orchestrator | Thursday 05 March 2026 00:49:36 +0000 (0:00:01.157) 0:00:35.802 ******** 2026-03-05 01:00:47.209656 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209659 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.209663 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.209667 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.209671 | 
orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.209674 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.209678 | orchestrator | 2026-03-05 01:00:47.209682 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-05 01:00:47.209686 | orchestrator | Thursday 05 March 2026 00:49:37 +0000 (0:00:01.312) 0:00:37.115 ******** 2026-03-05 01:00:47.209690 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209694 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.209697 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.209701 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.209705 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.209709 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.209713 | orchestrator | 2026-03-05 01:00:47.209716 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-05 01:00:47.209720 | orchestrator | Thursday 05 March 2026 00:49:39 +0000 (0:00:01.300) 0:00:38.416 ******** 2026-03-05 01:00:47.209724 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.209728 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.209732 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.209736 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.209739 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.209743 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.209747 | orchestrator | 2026-03-05 01:00:47.209751 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-05 01:00:47.209754 | orchestrator | Thursday 05 March 2026 00:49:39 +0000 (0:00:00.869) 0:00:39.285 ******** 2026-03-05 01:00:47.209759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--8e61642d--a609--5f4c--883e--a16b698ed397-osd--block--8e61642d--a609--5f4c--883e--a16b698ed397', 'dm-uuid-LVM-LbLRM4MoU7LrtCpLRhZ98aBrXC5CKd9TorD81YopypD0x28jJAK8Hq9clofUSZiz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1a9c38f8--c56f--5625--8ade--2e45962405d2-osd--block--1a9c38f8--c56f--5625--8ade--2e45962405d2', 'dm-uuid-LVM-Awd6JFTEZhabPZZ269I3lfUatL84usmfNrJzp1u0OfKxZ9ov2M1W0FL1CTfuuxfS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--487cf15b--a3c4--55bb--8565--d1e78d85d824-osd--block--487cf15b--a3c4--55bb--8565--d1e78d85d824', 'dm-uuid-LVM-Dt8XNnSQe3wlln96iskXeizrfvxQBhuXH3Sg7aJ4PaS3fhgGCsS4rDsTtuxSPKbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04f48836--d47d--5181--a61a--7e2c62572595-osd--block--04f48836--d47d--5181--a61a--7e2c62572595', 'dm-uuid-LVM-OJN9tS92YMA7b805RALhO0UBIRFqsk88oV19gmqjAodf7KHfSG0FCr1O8vHcprn1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part1', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part14', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part15', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part16', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.209852 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8e61642d--a609--5f4c--883e--a16b698ed397-osd--block--8e61642d--a609--5f4c--883e--a16b698ed397'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AFl9q1-L64n-Gj7c-kBPf-4pLx-6hdv-2dXo3s', 'scsi-0QEMU_QEMU_HARDDISK_7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4', 'scsi-SQEMU_QEMU_HARDDISK_7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.209865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1a9c38f8--c56f--5625--8ade--2e45962405d2-osd--block--1a9c38f8--c56f--5625--8ade--2e45962405d2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B01kgl-LOeW-EjUU-UANj-Hb1R-VO9H-0ZSNyu', 'scsi-0QEMU_QEMU_HARDDISK_1cde8d38-c9d3-4512-8106-c139834ff42b', 'scsi-SQEMU_QEMU_HARDDISK_1cde8d38-c9d3-4512-8106-c139834ff42b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.209880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fbedff-eb29-4e1b-a232-9476e4a5bada', 'scsi-SQEMU_QEMU_HARDDISK_e9fbedff-eb29-4e1b-a232-9476e4a5bada'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.209892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.209901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-05 01:00:47.209908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bb27c3c1--5e00--588a--af48--66c3e9a20c72-osd--block--bb27c3c1--5e00--588a--af48--66c3e9a20c72', 'dm-uuid-LVM-aAhEHT9pjwGSpfrIrtjDtv5kGox94UV3Hcd8aIBrz2VbIQnyCRFrxK1WBmY4wZuT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.209919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part1', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part14', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part15', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part16', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.209928 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--487cf15b--a3c4--55bb--8565--d1e78d85d824-osd--block--487cf15b--a3c4--55bb--8565--d1e78d85d824'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2gyAHZ-CD5F-8jUg-pmWW-VCFj-v7X8-fe5qeY', 'scsi-0QEMU_QEMU_HARDDISK_9c8197fe-cfc6-470d-b43f-168fdfa4c980', 'scsi-SQEMU_QEMU_HARDDISK_9c8197fe-cfc6-470d-b43f-168fdfa4c980'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--04f48836--d47d--5181--a61a--7e2c62572595-osd--block--04f48836--d47d--5181--a61a--7e2c62572595'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PS3SUM-PYZF-ELRU-RN5I-RCkV-E6ZE-TFZhn0', 'scsi-0QEMU_QEMU_HARDDISK_bc7e009b-77b4-429d-819f-0751386ded0b', 'scsi-SQEMU_QEMU_HARDDISK_bc7e009b-77b4-429d-819f-0751386ded0b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c272dc3f-f5b6-4857-91f2-561a599f15b5', 'scsi-SQEMU_QEMU_HARDDISK_c272dc3f-f5b6-4857-91f2-561a599f15b5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52eeae7c--0ac3--5716--aafe--40e466221a22-osd--block--52eeae7c--0ac3--5716--aafe--40e466221a22', 'dm-uuid-LVM-yisYFX54apoGhi6gycsqiSU5w2pvRttzJJr37NcZ9qiTzIf7Tb0paCfHpcE4eNSQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210376 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.210381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210424 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part1', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part14', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part15', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part16', 
'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': 
{'holders': ['ceph--bb27c3c1--5e00--588a--af48--66c3e9a20c72-osd--block--bb27c3c1--5e00--588a--af48--66c3e9a20c72'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NdTYuF-6z14-ZW1D-7Z0k-Kg9t-W74X-gW7nVL', 'scsi-0QEMU_QEMU_HARDDISK_177e9830-d762-48d2-8720-88dd872b3a27', 'scsi-SQEMU_QEMU_HARDDISK_177e9830-d762-48d2-8720-88dd872b3a27'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210484 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.210488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--52eeae7c--0ac3--5716--aafe--40e466221a22-osd--block--52eeae7c--0ac3--5716--aafe--40e466221a22'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WrdaDs-mwcO-AhgX-fS5E-xeBY-IK1o-ejxiDn', 'scsi-0QEMU_QEMU_HARDDISK_80e7620b-1c7d-40ff-852b-40246feca9c5', 'scsi-SQEMU_QEMU_HARDDISK_80e7620b-1c7d-40ff-852b-40246feca9c5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b', 'scsi-SQEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part16', 'scsi-SQEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210512 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_886d7f4d-c342-4547-93ea-f5198c18b4a1', 'scsi-SQEMU_QEMU_HARDDISK_886d7f4d-c342-4547-93ea-f5198c18b4a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210573 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.210577 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.210581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90', 'scsi-SQEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part1', 'scsi-SQEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part14', 'scsi-SQEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part15', 'scsi-SQEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part16', 'scsi-SQEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210648 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210660 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.210670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-05 01:00:47.210691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:00:47.210703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9', 'scsi-SQEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part1', 'scsi-SQEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part14', 'scsi-SQEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part15', 'scsi-SQEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part16', 'scsi-SQEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210714 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:00:47.210745 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.210750 | orchestrator | 2026-03-05 01:00:47.210754 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-05 01:00:47.210759 | orchestrator | Thursday 05 March 2026 00:49:44 +0000 (0:00:04.870) 0:00:44.156 ******** 2026-03-05 01:00:47.210776 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8e61642d--a609--5f4c--883e--a16b698ed397-osd--block--8e61642d--a609--5f4c--883e--a16b698ed397', 'dm-uuid-LVM-LbLRM4MoU7LrtCpLRhZ98aBrXC5CKd9TorD81YopypD0x28jJAK8Hq9clofUSZiz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210782 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1a9c38f8--c56f--5625--8ade--2e45962405d2-osd--block--1a9c38f8--c56f--5625--8ade--2e45962405d2', 'dm-uuid-LVM-Awd6JFTEZhabPZZ269I3lfUatL84usmfNrJzp1u0OfKxZ9ov2M1W0FL1CTfuuxfS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210786 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210790 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210808 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210816 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210823 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210834 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210838 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210862 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part1', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part14', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part15', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part16', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210874 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8e61642d--a609--5f4c--883e--a16b698ed397-osd--block--8e61642d--a609--5f4c--883e--a16b698ed397'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AFl9q1-L64n-Gj7c-kBPf-4pLx-6hdv-2dXo3s', 'scsi-0QEMU_QEMU_HARDDISK_7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4', 'scsi-SQEMU_QEMU_HARDDISK_7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210879 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1a9c38f8--c56f--5625--8ade--2e45962405d2-osd--block--1a9c38f8--c56f--5625--8ade--2e45962405d2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B01kgl-LOeW-EjUU-UANj-Hb1R-VO9H-0ZSNyu', 'scsi-0QEMU_QEMU_HARDDISK_1cde8d38-c9d3-4512-8106-c139834ff42b', 'scsi-SQEMU_QEMU_HARDDISK_1cde8d38-c9d3-4512-8106-c139834ff42b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fbedff-eb29-4e1b-a232-9476e4a5bada', 'scsi-SQEMU_QEMU_HARDDISK_e9fbedff-eb29-4e1b-a232-9476e4a5bada'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210892 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210900 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--487cf15b--a3c4--55bb--8565--d1e78d85d824-osd--block--487cf15b--a3c4--55bb--8565--d1e78d85d824', 'dm-uuid-LVM-Dt8XNnSQe3wlln96iskXeizrfvxQBhuXH3Sg7aJ4PaS3fhgGCsS4rDsTtuxSPKbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210907 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04f48836--d47d--5181--a61a--7e2c62572595-osd--block--04f48836--d47d--5181--a61a--7e2c62572595', 'dm-uuid-LVM-OJN9tS92YMA7b805RALhO0UBIRFqsk88oV19gmqjAodf7KHfSG0FCr1O8vHcprn1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210912 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210917 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.210922 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210926 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210938 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210943 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210962 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210980 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210985 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.210994 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part1', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part14', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part15', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part16', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
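The skip records above come from the ceph-facts task "Set_fact devices generate device list when osd_auto_discovery", which iterates over every entry in `ansible_devices` and is skipped here because `osd_auto_discovery | default(False) | bool` is false. When auto-discovery is enabled, the role builds an OSD device list by filtering out disks that are not usable as blank OSD targets. The sketch below is a hypothetical Python rendering of that kind of filter, not ceph-ansible's exact conditions; the function name and the specific exclusion rules (loop/dm/cdrom devices, removable media, partitioned disks, disks already held by LVM) are assumptions for illustration.

```python
# Hypothetical sketch of an osd_auto_discovery-style device filter.
# Input mirrors the shape of the ansible_devices facts seen in the log
# above; the exact exclusion rules are assumed, not ceph-ansible's own.
def discover_osd_devices(ansible_devices):
    devices = []
    for name, info in ansible_devices.items():
        if name.startswith(("loop", "dm-", "sr")):  # skip loop/LVM/cdrom devices
            continue
        if info.get("removable") != "0":            # skip removable media
            continue
        if info.get("partitions"):                  # skip already-partitioned disks
            continue
        if info.get("holders"):                     # skip disks in use (e.g. LVM PVs)
            continue
        devices.append("/dev/" + name)
    return sorted(devices)

# Minimal facts modelled on the log: sda is the partitioned root disk,
# sdb is already an LVM PV holding a ceph OSD volume, sdd is blank,
# sr0 is the config-drive cdrom, loop0 is a virtual loop device.
facts = {
    "sda": {"removable": "0", "partitions": {"sda1": {}}, "holders": []},
    "sdb": {"removable": "0", "partitions": {}, "holders": ["ceph-osd-block"]},
    "sdd": {"removable": "0", "partitions": {}, "holders": []},
    "sr0": {"removable": "1", "partitions": {}, "holders": []},
    "loop0": {"removable": "0", "partitions": {}, "holders": []},
}
print(discover_osd_devices(facts))  # ['/dev/sdd']
```

This matches what the log shows indirectly: on the OSD nodes, `sdb`/`sdc` already carry ceph LVM volumes (`masters: ['dm-0']`, `['dm-1']`), `sda` holds the root filesystem, and only the blank `sdd` would survive such a filter.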
2026-03-05 01:00:47.211005 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bb27c3c1--5e00--588a--af48--66c3e9a20c72-osd--block--bb27c3c1--5e00--588a--af48--66c3e9a20c72', 'dm-uuid-LVM-aAhEHT9pjwGSpfrIrtjDtv5kGox94UV3Hcd8aIBrz2VbIQnyCRFrxK1WBmY4wZuT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211010 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52eeae7c--0ac3--5716--aafe--40e466221a22-osd--block--52eeae7c--0ac3--5716--aafe--40e466221a22', 'dm-uuid-LVM-yisYFX54apoGhi6gycsqiSU5w2pvRttzJJr37NcZ9qiTzIf7Tb0paCfHpcE4eNSQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211015 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': 
['ceph--487cf15b--a3c4--55bb--8565--d1e78d85d824-osd--block--487cf15b--a3c4--55bb--8565--d1e78d85d824'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2gyAHZ-CD5F-8jUg-pmWW-VCFj-v7X8-fe5qeY', 'scsi-0QEMU_QEMU_HARDDISK_9c8197fe-cfc6-470d-b43f-168fdfa4c980', 'scsi-SQEMU_QEMU_HARDDISK_9c8197fe-cfc6-470d-b43f-168fdfa4c980'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211023 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211032 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--04f48836--d47d--5181--a61a--7e2c62572595-osd--block--04f48836--d47d--5181--a61a--7e2c62572595'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PS3SUM-PYZF-ELRU-RN5I-RCkV-E6ZE-TFZhn0', 'scsi-0QEMU_QEMU_HARDDISK_bc7e009b-77b4-429d-819f-0751386ded0b', 'scsi-SQEMU_QEMU_HARDDISK_bc7e009b-77b4-429d-819f-0751386ded0b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211040 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c272dc3f-f5b6-4857-91f2-561a599f15b5', 'scsi-SQEMU_QEMU_HARDDISK_c272dc3f-f5b6-4857-91f2-561a599f15b5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211045 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211050 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211055 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211063 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.211071 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211075 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211083 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211088 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211093 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211098 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211108 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211116 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211124 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211129 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211351 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b', 'scsi-SQEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part16', 'scsi-SQEMU_QEMU_HARDDISK_f3667f1e-5067-4036-b179-f7ed5b88883b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211368 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211376 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211381 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211388 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211392 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211399 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.211403 | orchestrator | skipping: [testbed-node-5] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211407 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211417 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part1', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part14', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part15', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part16', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-05 01:00:47.211421 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211428 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bb27c3c1--5e00--588a--af48--66c3e9a20c72-osd--block--bb27c3c1--5e00--588a--af48--66c3e9a20c72'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NdTYuF-6z14-ZW1D-7Z0k-Kg9t-W74X-gW7nVL', 'scsi-0QEMU_QEMU_HARDDISK_177e9830-d762-48d2-8720-88dd872b3a27', 'scsi-SQEMU_QEMU_HARDDISK_177e9830-d762-48d2-8720-88dd872b3a27'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211433 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--52eeae7c--0ac3--5716--aafe--40e466221a22-osd--block--52eeae7c--0ac3--5716--aafe--40e466221a22'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WrdaDs-mwcO-AhgX-fS5E-xeBY-IK1o-ejxiDn', 'scsi-0QEMU_QEMU_HARDDISK_80e7620b-1c7d-40ff-852b-40246feca9c5', 'scsi-SQEMU_QEMU_HARDDISK_80e7620b-1c7d-40ff-852b-40246feca9c5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211440 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211446 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211450 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211457 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_886d7f4d-c342-4547-93ea-f5198c18b4a1', 'scsi-SQEMU_QEMU_HARDDISK_886d7f4d-c342-4547-93ea-f5198c18b4a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211461 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211465 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211483 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211489 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211493 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211500 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211504 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211516 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9', 'scsi-SQEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part1', 'scsi-SQEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part14', 'scsi-SQEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part15', 'scsi-SQEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part16', 'scsi-SQEMU_QEMU_HARDDISK_c0aa8b33-2596-46db-b782-3e102abbb8d9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-05 01:00:47.211521 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211528 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211532 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211536 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.211540 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:00:47.211544 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.211554 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90', 'scsi-SQEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part1', 'scsi-SQEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part14', 'scsi-SQEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part15', 'scsi-SQEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part16', 'scsi-SQEMU_QEMU_HARDDISK_40dcebc6-2ad4-440f-87a2-f05db4a8eb90-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-05 01:00:47.211561 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-05 01:00:47.211565 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.211569 | orchestrator |
2026-03-05 01:00:47.211573 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-05 01:00:47.211577 | orchestrator | Thursday 05 March 2026 00:49:48 +0000 (0:00:04.179) 0:00:48.335 ********
2026-03-05 01:00:47.211581 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.211585 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.211589 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.211593 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.211597 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.211601 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.211668 | orchestrator |
2026-03-05 01:00:47.211672 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-05 01:00:47.211676 | orchestrator | Thursday 05 March 2026 00:49:50 +0000 (0:00:01.926) 0:00:50.262 ********
2026-03-05 01:00:47.211680 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.211684 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.211688 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.211691 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.211695 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.211699 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.211703 | orchestrator |
2026-03-05 01:00:47.211706 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-05 01:00:47.211710 | orchestrator | Thursday 05 March 2026 00:49:52 +0000 (0:00:01.153) 0:00:51.416 ********
2026-03-05 01:00:47.211714 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.211718 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.211721 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.211725 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.211729 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.211733 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.211737 | orchestrator |
2026-03-05 01:00:47.211740 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-05 01:00:47.211748 | orchestrator | Thursday 05 March 2026 00:49:53 +0000 (0:00:01.760) 0:00:53.176 ********
2026-03-05 01:00:47.211752 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.211760 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.211763 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.211767 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.211771 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.211774 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.211778 | orchestrator |
2026-03-05 01:00:47.211782 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-05 01:00:47.211786 | orchestrator | Thursday 05 March 2026 00:49:54 +0000 (0:00:00.843) 0:00:54.020 ********
2026-03-05 01:00:47.211790 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.211793 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.211797 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.211801 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.211805 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.211808 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.211812 | orchestrator |
2026-03-05 01:00:47.211816 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-05 01:00:47.211820 | orchestrator | Thursday 05 March 2026 00:49:56 +0000 (0:00:01.752) 0:00:55.773 ********
2026-03-05 01:00:47.211826 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.211830 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.211833 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.211837 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.211841 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.211845 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.211848 | orchestrator |
2026-03-05 01:00:47.211852 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-05 01:00:47.211856 | orchestrator | Thursday 05 March 2026 00:49:57 +0000 (0:00:01.459) 0:00:57.233 ********
2026-03-05 01:00:47.211860 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-05 01:00:47.211864 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-05 01:00:47.211868 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-05 01:00:47.211871 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-05 01:00:47.211875 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-05 01:00:47.211879 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-05 01:00:47.211883 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-05 01:00:47.211886 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-05 01:00:47.211890 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-05 01:00:47.211894 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-05 01:00:47.211898 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-05 01:00:47.211901 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-05 01:00:47.211905 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-05 01:00:47.211909 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-05 01:00:47.211912 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-05 01:00:47.211916 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-05 01:00:47.211920 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-05 01:00:47.211924 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-05 01:00:47.211927 | orchestrator |
2026-03-05 01:00:47.211931 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-05 01:00:47.211935 | orchestrator | Thursday 05 March 2026 00:50:03 +0000 (0:00:05.631) 0:01:02.865 ********
2026-03-05 01:00:47.211939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-05 01:00:47.211943 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-05 01:00:47.211947 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-05 01:00:47.211950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-05 01:00:47.211958 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-05 01:00:47.211962 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-05 01:00:47.211965 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.211969 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-05 01:00:47.211973 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-05 01:00:47.211977 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-05 01:00:47.211980 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.211984 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-05 01:00:47.211988 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.211992 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-05 01:00:47.211995 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-05 01:00:47.211999 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-05 01:00:47.212003 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-05 01:00:47.212007 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-05 01:00:47.212010 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.212014 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.212018 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-05 01:00:47.212022 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-05 01:00:47.212025 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-05 01:00:47.212029 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.212033 | orchestrator |
2026-03-05 01:00:47.212037 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-05 01:00:47.212040 | orchestrator | Thursday 05 March 2026 00:50:05 +0000 (0:00:01.544) 0:01:04.409 ********
2026-03-05 01:00:47.212047 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.212055 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.212060 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.212071 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.212080 | orchestrator |
2026-03-05 01:00:47.212086 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-05 01:00:47.212092 | orchestrator | Thursday 05 March 2026 00:50:08 +0000 (0:00:03.122) 0:01:07.531 ********
2026-03-05 01:00:47.212098 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.212103 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.212109 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.212115 | orchestrator |
2026-03-05 01:00:47.212121 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-05 01:00:47.212128 | orchestrator | Thursday 05 March 2026 00:50:08 +0000 (0:00:00.444) 0:01:07.976 ********
2026-03-05 01:00:47.212177 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.212185 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.212190 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.212196 | orchestrator |
2026-03-05 01:00:47.212204 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-05 01:00:47.212212 | orchestrator | Thursday 05 March 2026 00:50:09 +0000 (0:00:00.450) 0:01:08.426 ********
2026-03-05 01:00:47.212215 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.212219 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.212223 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.212227 | orchestrator |
2026-03-05 01:00:47.212230 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-05 01:00:47.212234 | orchestrator | Thursday 05 March 2026 00:50:09 +0000 (0:00:00.905) 0:01:09.332 ********
2026-03-05 01:00:47.212238 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.212242 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.212251 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.212255 | orchestrator |
2026-03-05 01:00:47.212258 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-05 01:00:47.212262 | orchestrator | Thursday 05 March 2026 00:50:10 +0000 (0:00:00.773) 0:01:10.105 ********
2026-03-05 01:00:47.212266 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 01:00:47.212270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 01:00:47.212273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 01:00:47.212277 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.212281 | orchestrator |
2026-03-05 01:00:47.212285 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-05 01:00:47.212288 | orchestrator | Thursday 05 March 2026 00:50:11 +0000 (0:00:00.419) 0:01:10.525 ********
2026-03-05 01:00:47.212292 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 01:00:47.212296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 01:00:47.212300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 01:00:47.212304 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.212307 | orchestrator |
2026-03-05 01:00:47.212311 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-05 01:00:47.212315 | orchestrator | Thursday 05 March 2026 00:50:11 +0000 (0:00:00.390) 0:01:10.916 ********
2026-03-05 01:00:47.212319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 01:00:47.212322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 01:00:47.212326 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 01:00:47.212330 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.212334 | orchestrator |
2026-03-05 01:00:47.212337 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-05 01:00:47.212341 | orchestrator | Thursday 05 March 2026 00:50:12 +0000 (0:00:00.448) 0:01:11.365 ********
2026-03-05 01:00:47.212345 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.212349 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.212353 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.212356 | orchestrator |
2026-03-05 01:00:47.212360 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-05 01:00:47.212364 | orchestrator | Thursday 05 March 2026 00:50:12 +0000 (0:00:00.352) 0:01:11.717 ********
2026-03-05 01:00:47.212368 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-05 01:00:47.212372 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-05 01:00:47.212375 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-05 01:00:47.212379 | orchestrator |
2026-03-05 01:00:47.212383 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-05 01:00:47.212387 | orchestrator | Thursday 05 March 2026 00:50:13 +0000 (0:00:01.598) 0:01:13.316 ********
2026-03-05 01:00:47.212390 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-05 01:00:47.212394 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-05 01:00:47.212398 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-05 01:00:47.212402 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 01:00:47.212406 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-05 01:00:47.212409 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-05 01:00:47.212413 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-05 01:00:47.212417 | orchestrator |
2026-03-05 01:00:47.212421 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-05 01:00:47.212424 | orchestrator | Thursday 05 March 2026 00:50:14 +0000 (0:00:00.805) 0:01:14.122 ********
2026-03-05 01:00:47.212428 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-05 01:00:47.212435 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-05 01:00:47.212442 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-05 01:00:47.212446 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 01:00:47.212450 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-05 01:00:47.212454 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-05 01:00:47.212457 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-05 01:00:47.212461 | orchestrator |
2026-03-05 01:00:47.212465 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-05 01:00:47.212469 | orchestrator | Thursday 05 March 2026 00:50:16 +0000 (0:00:01.981) 0:01:16.103 ********
2026-03-05 01:00:47.212473 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.212478 | orchestrator |
2026-03-05 01:00:47.212482 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-05 01:00:47.212488 | orchestrator | Thursday 05 March 2026 00:50:18 +0000 (0:00:01.265) 0:01:17.369 ********
2026-03-05 01:00:47.212492 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.212496 | orchestrator |
2026-03-05 01:00:47.212500 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-05 01:00:47.212504 | orchestrator | Thursday 05 March 2026 00:50:19 +0000 (0:00:01.343) 0:01:18.713 ********
2026-03-05 01:00:47.212507 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.212511 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.212515 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.212519 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.212522 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.212526 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.212530 | orchestrator |
2026-03-05 01:00:47.212534 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-05 01:00:47.212537 | orchestrator | Thursday 05 March 2026 00:50:20 +0000 (0:00:01.422) 0:01:20.135 ********
2026-03-05 01:00:47.212541 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.212545 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.212549 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.212553 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.212556 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.212560 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.212564 | orchestrator |
2026-03-05 01:00:47.212568 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-05 01:00:47.212571 | orchestrator | Thursday 05 March 2026 00:50:21 +0000 (0:00:00.879) 0:01:21.015 ********
2026-03-05 01:00:47.212575 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.212579 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.212583 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.212586 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.212653 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.212657 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.212661 | orchestrator |
2026-03-05 01:00:47.212665 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-05 01:00:47.212669 | orchestrator | Thursday 05 March 2026 00:50:22 +0000 (0:00:01.069) 0:01:22.085 ********
2026-03-05 01:00:47.212672 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.212676 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.212680 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.212684 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.212692 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.212696 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.212699 | orchestrator |
2026-03-05 01:00:47.212703 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-05 01:00:47.212707 | orchestrator | Thursday 05 March 2026 00:50:24 +0000 (0:00:01.530) 0:01:23.615 ********
2026-03-05 01:00:47.212711 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.212715 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.212718 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.212722 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.212726 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.212730 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.212733 | orchestrator |
2026-03-05 01:00:47.212737 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-05 01:00:47.212741 | orchestrator | Thursday 05 March 2026 00:50:26 +0000 (0:00:01.993) 0:01:25.609 ********
2026-03-05 01:00:47.212745 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.212748 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.212752 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.212756 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.212760 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.212763 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.212767 | orchestrator |
2026-03-05 01:00:47.212771 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-05 01:00:47.212775 | orchestrator | Thursday 05 March 2026 00:50:27 +0000 (0:00:01.137) 0:01:26.747 ********
2026-03-05 01:00:47.212778 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.212782 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.212786 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.212790 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.212793 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.212797 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.212801 | orchestrator |
2026-03-05 01:00:47.212805 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-05 01:00:47.212808 | orchestrator | Thursday 05 March 2026 00:50:28 +0000 (0:00:01.306) 0:01:28.053 ********
2026-03-05 01:00:47.212812 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.212816 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.212820 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.212824 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.212827 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.212831 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.212835 | orchestrator |
2026-03-05 01:00:47.212841 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-05 01:00:47.212845 | orchestrator | Thursday 05 March 2026 00:50:30 +0000 (0:00:01.355) 0:01:29.408 ********
2026-03-05 01:00:47.212849 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.212853 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.212856 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.212860 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.212864 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.212867 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.212871 | orchestrator |
2026-03-05 01:00:47.212875 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-05 01:00:47.212879 | orchestrator | Thursday 05 March 2026 00:50:31 +0000 (0:00:01.673) 0:01:31.082 ********
2026-03-05 01:00:47.212883 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.212886 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.212890 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.212894 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.212898 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.212901 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.212905 | orchestrator |
2026-03-05 01:00:47.212909 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-05 01:00:47.212919 | orchestrator | Thursday 05 March 2026 00:50:32 +0000 (0:00:00.800) 0:01:31.882 ********
2026-03-05 01:00:47.212923 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.212927 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.212930 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.212934 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.212938 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.212942 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.212945 | orchestrator |
2026-03-05 01:00:47.212949 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-05 01:00:47.212953 | orchestrator | Thursday 05 March 2026 00:50:34 +0000 (0:00:02.115) 0:01:33.998 ********
2026-03-05 01:00:47.212957 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.212960 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.212964 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.212968 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.212972 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.212975 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.212979 | orchestrator |
2026-03-05 01:00:47.212983 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-05 01:00:47.212987 | orchestrator | Thursday 05 March 2026 00:50:36 +0000 (0:00:01.526) 0:01:35.524 ********
2026-03-05 01:00:47.212990 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.212994 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.212998 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.213002 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.213005 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.213009 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.213013 | orchestrator |
2026-03-05 01:00:47.213016 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-05 01:00:47.213020 | orchestrator | Thursday 05 March 2026 00:50:37 +0000 (0:00:01.039) 0:01:36.563 ********
2026-03-05 01:00:47.213024 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.213028 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.213032 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.213035 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.213039 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.213043 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.213047 | orchestrator |
2026-03-05 01:00:47.213050 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-05 01:00:47.213054 | orchestrator | Thursday 05 March 2026 00:50:38 +0000 (0:00:01.112) 0:01:37.676 ********
2026-03-05 01:00:47.213058 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.213062 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.213065 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.213069 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.213073 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.213076 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.213080 | orchestrator |
2026-03-05 01:00:47.213084 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-05 01:00:47.213088 | orchestrator | Thursday 05 March 2026 00:50:40 +0000 (0:00:01.693) 0:01:39.370 ********
2026-03-05 01:00:47.213091 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.213095 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.213099 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.213103 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.213106 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.213110 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.213114 | orchestrator |
2026-03-05 01:00:47.213118 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-05 01:00:47.213121 | orchestrator | Thursday 05 March 2026 00:50:40 +0000 (0:00:00.791) 0:01:40.161 ********
2026-03-05 01:00:47.213125 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.213129 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.213152 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.213159 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.213163 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.213167 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.213171 | orchestrator |
2026-03-05 01:00:47.213175 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-05 01:00:47.213179 | orchestrator | Thursday 05 March 2026 00:50:42 +0000 (0:00:01.403) 0:01:41.565 ********
2026-03-05 01:00:47.213182 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.213186 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.213190 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.213194 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.213198 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.213202 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.213205 | orchestrator |
2026-03-05 01:00:47.213209 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-05 01:00:47.213213 | orchestrator | Thursday 05 March 2026 00:50:43 +0000 (0:00:00.804) 0:01:42.369 ********
2026-03-05 01:00:47.213217 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.213221 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.213225 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.213228 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.213232 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.213236 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.213240 | orchestrator |
2026-03-05 01:00:47.213248 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-05 01:00:47.213252 | orchestrator | Thursday 05 March 2026 00:50:44 +0000 (0:00:01.469) 0:01:43.838 ********
2026-03-05 01:00:47.213256 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.213260 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.213264 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.213274 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.213278 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.213281 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.213285 | orchestrator |
2026-03-05 01:00:47.213289 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-05 01:00:47.213293 | orchestrator | Thursday 05 March 2026 00:50:46 +0000 (0:00:01.708) 0:01:45.547 ********
2026-03-05 01:00:47.213297 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.213301 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.213304 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.213308 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.213312 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.213316 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.213320 | orchestrator |
2026-03-05 01:00:47.213326 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-05 01:00:47.213330 | orchestrator | Thursday 05 March 2026 00:50:49 +0000 (0:00:03.701) 0:01:49.249 ********
2026-03-05 01:00:47.213334 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.213338 | orchestrator |
2026-03-05 01:00:47.213342 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-05 01:00:47.213345 | orchestrator | Thursday 05 March 2026 00:50:51 +0000 (0:00:01.184) 0:01:50.433 ********
2026-03-05 01:00:47.213349 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.213353 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.213408 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.213412 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.213416 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.213420 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.213424 | orchestrator |
2026-03-05 01:00:47.213427 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-05 01:00:47.213431 | orchestrator | Thursday 05 March 2026 00:50:51 +0000 (0:00:00.639) 0:01:51.072 ********
2026-03-05 01:00:47.213441 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.213445 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.213448 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.213452 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.213456 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.213460 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.213464 | orchestrator |
2026-03-05 01:00:47.213468 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-05 01:00:47.213471 | orchestrator | Thursday 05 March 2026 00:50:52 +0000 (0:00:00.851) 0:01:51.924 ********
2026-03-05 01:00:47.213475 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-05 01:00:47.213479 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-05 01:00:47.213504 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-05 01:00:47.213508 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-05 01:00:47.213512 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-05 01:00:47.213516 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-05 01:00:47.213520 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-05 01:00:47.213524 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-05 01:00:47.213527 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-05 01:00:47.213531 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-05 01:00:47.213535 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-05 01:00:47.213539 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-05 01:00:47.213542 | orchestrator |
2026-03-05 01:00:47.213546 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-05 01:00:47.213550 | orchestrator | Thursday 05 March 2026 00:50:54 +0000 (0:00:01.718) 0:01:53.643 ********
2026-03-05 01:00:47.213554 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.213557 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.213561 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.213565 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.213569 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.213572 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.213576 | orchestrator |
2026-03-05 01:00:47.213580 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-05 01:00:47.213584 | orchestrator | Thursday 05 March 2026 00:50:55 +0000 (0:00:01.456) 0:01:55.100 ********
2026-03-05 01:00:47.213588 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.213591 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.213595 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.213599 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.213603 |
orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.213606 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.213610 | orchestrator | 2026-03-05 01:00:47.213614 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-05 01:00:47.213618 | orchestrator | Thursday 05 March 2026 00:50:56 +0000 (0:00:00.634) 0:01:55.734 ******** 2026-03-05 01:00:47.213622 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.213628 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.213632 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.213636 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.213639 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.213643 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.213647 | orchestrator | 2026-03-05 01:00:47.213651 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-05 01:00:47.213658 | orchestrator | Thursday 05 March 2026 00:50:57 +0000 (0:00:01.083) 0:01:56.818 ******** 2026-03-05 01:00:47.213662 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.213666 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.213669 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.213673 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.213677 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.213681 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.213685 | orchestrator | 2026-03-05 01:00:47.213689 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-05 01:00:47.213692 | orchestrator | Thursday 05 March 2026 00:50:58 +0000 (0:00:00.616) 0:01:57.434 ******** 2026-03-05 01:00:47.213699 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:00:47.213703 | orchestrator | 2026-03-05 01:00:47.213707 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-05 01:00:47.213710 | orchestrator | Thursday 05 March 2026 00:50:59 +0000 (0:00:01.456) 0:01:58.891 ******** 2026-03-05 01:00:47.213714 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.213718 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.213722 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.213726 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.213729 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.213733 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.213737 | orchestrator | 2026-03-05 01:00:47.213741 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-05 01:00:47.213745 | orchestrator | Thursday 05 March 2026 00:51:43 +0000 (0:00:44.337) 0:02:43.228 ******** 2026-03-05 01:00:47.213748 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-05 01:00:47.213752 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-05 01:00:47.213756 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-05 01:00:47.213760 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.213764 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-05 01:00:47.213767 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-05 01:00:47.213771 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-05 01:00:47.213775 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-05 01:00:47.213779 | orchestrator | skipping: [testbed-node-4] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-03-05 01:00:47.213783 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-05 01:00:47.213786 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.213790 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-05 01:00:47.213794 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-05 01:00:47.213798 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-05 01:00:47.213802 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.213805 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-05 01:00:47.213809 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-05 01:00:47.213813 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-05 01:00:47.213817 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.213821 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.213824 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-05 01:00:47.213830 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-05 01:00:47.213842 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-05 01:00:47.213849 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.213855 | orchestrator | 2026-03-05 01:00:47.213865 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-05 01:00:47.213873 | orchestrator | Thursday 05 March 2026 00:51:44 +0000 (0:00:00.827) 0:02:44.055 ******** 2026-03-05 01:00:47.213879 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.213885 | orchestrator | skipping: [testbed-node-4] 2026-03-05 
01:00:47.213890 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.213896 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.213904 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.213910 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.213915 | orchestrator | 2026-03-05 01:00:47.213922 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-05 01:00:47.213949 | orchestrator | Thursday 05 March 2026 00:51:45 +0000 (0:00:00.919) 0:02:44.975 ******** 2026-03-05 01:00:47.213956 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.213962 | orchestrator | 2026-03-05 01:00:47.213969 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-05 01:00:47.213976 | orchestrator | Thursday 05 March 2026 00:51:45 +0000 (0:00:00.148) 0:02:45.124 ******** 2026-03-05 01:00:47.213980 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.213984 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.213988 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.213991 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.214053 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.214064 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.214070 | orchestrator | 2026-03-05 01:00:47.214074 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-05 01:00:47.214078 | orchestrator | Thursday 05 March 2026 00:51:46 +0000 (0:00:00.857) 0:02:45.981 ******** 2026-03-05 01:00:47.214082 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.214085 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.214089 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.214093 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.214097 | orchestrator | skipping: [testbed-node-1] 2026-03-05 
01:00:47.214101 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.214104 | orchestrator | 2026-03-05 01:00:47.214109 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-05 01:00:47.214116 | orchestrator | Thursday 05 March 2026 00:51:47 +0000 (0:00:01.304) 0:02:47.285 ******** 2026-03-05 01:00:47.214123 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.214129 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.214149 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.214156 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.214162 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.214169 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.214175 | orchestrator | 2026-03-05 01:00:47.214186 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-05 01:00:47.214192 | orchestrator | Thursday 05 March 2026 00:51:48 +0000 (0:00:00.917) 0:02:48.203 ******** 2026-03-05 01:00:47.214198 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.214203 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.214209 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.214215 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.214222 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.214228 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.214234 | orchestrator | 2026-03-05 01:00:47.214241 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-05 01:00:47.214247 | orchestrator | Thursday 05 March 2026 00:51:51 +0000 (0:00:02.728) 0:02:50.931 ******** 2026-03-05 01:00:47.214254 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.214266 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.214273 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.214279 | orchestrator | ok: [testbed-node-0] 
2026-03-05 01:00:47.214285 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.214291 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.214298 | orchestrator | 2026-03-05 01:00:47.214301 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-05 01:00:47.214307 | orchestrator | Thursday 05 March 2026 00:51:52 +0000 (0:00:00.846) 0:02:51.777 ******** 2026-03-05 01:00:47.214314 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:00:47.214321 | orchestrator | 2026-03-05 01:00:47.214328 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-05 01:00:47.214334 | orchestrator | Thursday 05 March 2026 00:51:54 +0000 (0:00:01.668) 0:02:53.446 ******** 2026-03-05 01:00:47.214340 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.214346 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.214351 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.214358 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.214364 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.214370 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.214376 | orchestrator | 2026-03-05 01:00:47.214382 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-05 01:00:47.214389 | orchestrator | Thursday 05 March 2026 00:51:55 +0000 (0:00:01.172) 0:02:54.618 ******** 2026-03-05 01:00:47.214395 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.214401 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.214408 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.214414 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.214421 | orchestrator | skipping: [testbed-node-1] 2026-03-05 
01:00:47.214427 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.214434 | orchestrator | 2026-03-05 01:00:47.214440 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-05 01:00:47.214446 | orchestrator | Thursday 05 March 2026 00:51:55 +0000 (0:00:00.655) 0:02:55.274 ******** 2026-03-05 01:00:47.214453 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.214459 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.214465 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.214472 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.214478 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.214484 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.214491 | orchestrator | 2026-03-05 01:00:47.214497 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-05 01:00:47.214503 | orchestrator | Thursday 05 March 2026 00:51:56 +0000 (0:00:00.858) 0:02:56.132 ******** 2026-03-05 01:00:47.214509 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.214516 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.214523 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.214527 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.214531 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.214534 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.214538 | orchestrator | 2026-03-05 01:00:47.214542 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-05 01:00:47.214546 | orchestrator | Thursday 05 March 2026 00:51:57 +0000 (0:00:00.641) 0:02:56.773 ******** 2026-03-05 01:00:47.214549 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.214553 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.214557 | orchestrator | skipping: [testbed-node-5] 2026-03-05 
01:00:47.214561 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.214564 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.214568 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.214572 | orchestrator | 2026-03-05 01:00:47.214576 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-05 01:00:47.214584 | orchestrator | Thursday 05 March 2026 00:51:58 +0000 (0:00:00.774) 0:02:57.547 ******** 2026-03-05 01:00:47.214588 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.214591 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.214598 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.214602 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.214606 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.214610 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.214613 | orchestrator | 2026-03-05 01:00:47.214617 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-05 01:00:47.214621 | orchestrator | Thursday 05 March 2026 00:51:58 +0000 (0:00:00.582) 0:02:58.130 ******** 2026-03-05 01:00:47.214625 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.214628 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.214632 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.214636 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.214640 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.214643 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.214647 | orchestrator | 2026-03-05 01:00:47.214651 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-05 01:00:47.214655 | orchestrator | Thursday 05 March 2026 00:51:59 +0000 (0:00:00.960) 0:02:59.090 ******** 2026-03-05 01:00:47.214658 | orchestrator | skipping: [testbed-node-3] 2026-03-05 
01:00:47.214662 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.214666 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.214670 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.214684 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.214688 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.214692 | orchestrator | 2026-03-05 01:00:47.214696 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-05 01:00:47.214700 | orchestrator | Thursday 05 March 2026 00:52:00 +0000 (0:00:00.781) 0:02:59.872 ******** 2026-03-05 01:00:47.214704 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.214707 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.214711 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.214715 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.214719 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.214722 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.214726 | orchestrator | 2026-03-05 01:00:47.214730 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-05 01:00:47.214734 | orchestrator | Thursday 05 March 2026 00:52:01 +0000 (0:00:01.437) 0:03:01.309 ******** 2026-03-05 01:00:47.214738 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:00:47.214742 | orchestrator | 2026-03-05 01:00:47.214746 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-05 01:00:47.214750 | orchestrator | Thursday 05 March 2026 00:52:03 +0000 (0:00:01.540) 0:03:02.849 ******** 2026-03-05 01:00:47.214754 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-05 01:00:47.214758 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-05 
01:00:47.214761 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-05 01:00:47.214765 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-05 01:00:47.214769 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-05 01:00:47.214773 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-05 01:00:47.214777 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-05 01:00:47.214780 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-05 01:00:47.214784 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-05 01:00:47.214788 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-05 01:00:47.214796 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-05 01:00:47.214800 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-05 01:00:47.214803 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-05 01:00:47.214807 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-05 01:00:47.214811 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-05 01:00:47.214815 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-05 01:00:47.214819 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-05 01:00:47.214822 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-05 01:00:47.214826 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-05 01:00:47.214830 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-05 01:00:47.214834 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-05 01:00:47.214837 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-05 01:00:47.214841 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 
2026-03-05 01:00:47.214845 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-05 01:00:47.214848 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-05 01:00:47.214852 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-05 01:00:47.214856 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-05 01:00:47.214860 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-05 01:00:47.214863 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-05 01:00:47.214867 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-05 01:00:47.214871 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-05 01:00:47.214874 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-05 01:00:47.214878 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-05 01:00:47.214882 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-05 01:00:47.214886 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-05 01:00:47.214890 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-05 01:00:47.214897 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-05 01:00:47.214900 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-05 01:00:47.214904 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-05 01:00:47.214908 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-05 01:00:47.214912 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-05 01:00:47.214916 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-05 01:00:47.214919 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-05 
01:00:47.214923 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-05 01:00:47.214927 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-05 01:00:47.214931 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-05 01:00:47.214934 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-05 01:00:47.214938 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-05 01:00:47.214945 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-05 01:00:47.214949 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-05 01:00:47.214953 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-05 01:00:47.214957 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-05 01:00:47.214960 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-05 01:00:47.214967 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-05 01:00:47.214971 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-05 01:00:47.214975 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-05 01:00:47.214978 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-05 01:00:47.214982 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-05 01:00:47.214986 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-05 01:00:47.214990 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-05 01:00:47.214993 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-05 01:00:47.214997 | orchestrator | changed: [testbed-node-3] => 
(item=/var/lib/ceph/bootstrap-mds) 2026-03-05 01:00:47.215001 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-05 01:00:47.215005 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-05 01:00:47.215008 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-05 01:00:47.215012 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-05 01:00:47.215016 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-05 01:00:47.215020 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-05 01:00:47.215023 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-05 01:00:47.215027 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-05 01:00:47.215031 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-05 01:00:47.215034 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-05 01:00:47.215038 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-05 01:00:47.215042 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-05 01:00:47.215046 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-05 01:00:47.215049 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-05 01:00:47.215053 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-05 01:00:47.215057 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-05 01:00:47.215061 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-05 01:00:47.215064 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-05 01:00:47.215068 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-05 01:00:47.215072 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-05 01:00:47.215076 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-05 01:00:47.215080 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-05 01:00:47.215083 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-05 01:00:47.215087 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-05 01:00:47.215091 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-05 01:00:47.215095 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-03-05 01:00:47.215099 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-03-05 01:00:47.215102 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-05 01:00:47.215106 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-03-05 01:00:47.215110 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-05 01:00:47.215114 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-05 01:00:47.215124 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-03-05 01:00:47.215128 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-05 01:00:47.215246 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-03-05 01:00:47.215272 | orchestrator | 2026-03-05 01:00:47.215276 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-05 01:00:47.215280 | orchestrator | Thursday 05 March 2026 00:52:11 +0000 (0:00:07.521) 0:03:10.371 ******** 2026-03-05 01:00:47.215284 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215288 
| orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215292 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215296 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:00:47.215300 | orchestrator | 2026-03-05 01:00:47.215304 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-05 01:00:47.215308 | orchestrator | Thursday 05 March 2026 00:52:12 +0000 (0:00:01.308) 0:03:11.680 ******** 2026-03-05 01:00:47.215319 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-05 01:00:47.215324 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-05 01:00:47.215327 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-05 01:00:47.215331 | orchestrator | 2026-03-05 01:00:47.215335 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-05 01:00:47.215339 | orchestrator | Thursday 05 March 2026 00:52:13 +0000 (0:00:00.995) 0:03:12.676 ******** 2026-03-05 01:00:47.215343 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-05 01:00:47.215346 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-05 01:00:47.215350 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-05 01:00:47.215354 | orchestrator | 2026-03-05 01:00:47.215358 | orchestrator | TASK 
[ceph-config : Reset num_osds] ******************************************** 2026-03-05 01:00:47.215362 | orchestrator | Thursday 05 March 2026 00:52:14 +0000 (0:00:01.441) 0:03:14.117 ******** 2026-03-05 01:00:47.215366 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.215369 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.215373 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.215377 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215381 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215385 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215388 | orchestrator | 2026-03-05 01:00:47.215392 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-05 01:00:47.215396 | orchestrator | Thursday 05 March 2026 00:52:15 +0000 (0:00:00.853) 0:03:14.970 ******** 2026-03-05 01:00:47.215400 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.215403 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.215407 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.215411 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215415 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215419 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215422 | orchestrator | 2026-03-05 01:00:47.215426 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-05 01:00:47.215430 | orchestrator | Thursday 05 March 2026 00:52:16 +0000 (0:00:01.265) 0:03:16.236 ******** 2026-03-05 01:00:47.215434 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.215438 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.215442 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215451 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.215454 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215458 | orchestrator | skipping: 
[testbed-node-2] 2026-03-05 01:00:47.215462 | orchestrator | 2026-03-05 01:00:47.215466 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-05 01:00:47.215470 | orchestrator | Thursday 05 March 2026 00:52:17 +0000 (0:00:00.884) 0:03:17.121 ******** 2026-03-05 01:00:47.215473 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.215477 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.215481 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.215485 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215489 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215492 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215496 | orchestrator | 2026-03-05 01:00:47.215500 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-05 01:00:47.215504 | orchestrator | Thursday 05 March 2026 00:52:19 +0000 (0:00:01.395) 0:03:18.517 ******** 2026-03-05 01:00:47.215508 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.215511 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.215515 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.215519 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215523 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215526 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215530 | orchestrator | 2026-03-05 01:00:47.215534 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-05 01:00:47.215538 | orchestrator | Thursday 05 March 2026 00:52:19 +0000 (0:00:00.779) 0:03:19.296 ******** 2026-03-05 01:00:47.215541 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.215545 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.215549 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.215553 | 
orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215557 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215560 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215564 | orchestrator | 2026-03-05 01:00:47.215574 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-05 01:00:47.215578 | orchestrator | Thursday 05 March 2026 00:52:20 +0000 (0:00:00.950) 0:03:20.246 ******** 2026-03-05 01:00:47.215582 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.215585 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.215589 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.215593 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215597 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215601 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215604 | orchestrator | 2026-03-05 01:00:47.215608 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-05 01:00:47.215612 | orchestrator | Thursday 05 March 2026 00:52:21 +0000 (0:00:00.774) 0:03:21.021 ******** 2026-03-05 01:00:47.215616 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.215620 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.215623 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.215627 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215631 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215635 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215638 | orchestrator | 2026-03-05 01:00:47.215645 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-05 01:00:47.215649 | orchestrator | Thursday 05 March 2026 00:52:22 +0000 (0:00:01.125) 0:03:22.147 ******** 2026-03-05 01:00:47.215653 | 
orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215656 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215660 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215664 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.215671 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.215675 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.215679 | orchestrator | 2026-03-05 01:00:47.215683 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-05 01:00:47.215687 | orchestrator | Thursday 05 March 2026 00:52:25 +0000 (0:00:02.957) 0:03:25.104 ******** 2026-03-05 01:00:47.215690 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.215694 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.215698 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.215702 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215706 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215709 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215713 | orchestrator | 2026-03-05 01:00:47.215717 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-05 01:00:47.215721 | orchestrator | Thursday 05 March 2026 00:52:26 +0000 (0:00:01.054) 0:03:26.158 ******** 2026-03-05 01:00:47.215725 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.215729 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.215733 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.215736 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215740 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215744 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215748 | orchestrator | 2026-03-05 01:00:47.215751 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-05 01:00:47.215755 | orchestrator | Thursday 05 March 
2026 00:52:27 +0000 (0:00:00.959) 0:03:27.117 ******** 2026-03-05 01:00:47.215759 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.215763 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.215767 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.215770 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215774 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215778 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215782 | orchestrator | 2026-03-05 01:00:47.215786 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-05 01:00:47.215790 | orchestrator | Thursday 05 March 2026 00:52:28 +0000 (0:00:01.037) 0:03:28.154 ******** 2026-03-05 01:00:47.215793 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-05 01:00:47.215797 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-05 01:00:47.215801 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-05 01:00:47.215805 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215809 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215813 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215817 | orchestrator | 2026-03-05 01:00:47.215820 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-05 01:00:47.215824 | orchestrator | Thursday 05 March 2026 00:52:29 +0000 (0:00:01.009) 0:03:29.164 ******** 2026-03-05 01:00:47.215829 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 
'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-05 01:00:47.215835 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-05 01:00:47.215843 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-05 01:00:47.215850 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-05 01:00:47.215854 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.215858 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-05 01:00:47.215865 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-03-05 01:00:47.215869 | orchestrator | skipping: 
[testbed-node-4] 2026-03-05 01:00:47.215873 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.215877 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215881 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215885 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215888 | orchestrator | 2026-03-05 01:00:47.215892 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-05 01:00:47.215896 | orchestrator | Thursday 05 March 2026 00:52:31 +0000 (0:00:01.287) 0:03:30.452 ******** 2026-03-05 01:00:47.215900 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.215903 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.215908 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.215911 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215915 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215919 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215923 | orchestrator | 2026-03-05 01:00:47.215927 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-05 01:00:47.215930 | orchestrator | Thursday 05 March 2026 00:52:31 +0000 (0:00:00.540) 0:03:30.992 ******** 2026-03-05 01:00:47.215934 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.215938 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.215942 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.215945 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215949 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215953 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215957 | orchestrator | 2026-03-05 01:00:47.215960 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-05 01:00:47.215964 | orchestrator | 
Thursday 05 March 2026 00:52:32 +0000 (0:00:00.745) 0:03:31.737 ******** 2026-03-05 01:00:47.215968 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.215972 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.215976 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.215979 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.215983 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.215987 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.215991 | orchestrator | 2026-03-05 01:00:47.215994 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-05 01:00:47.215998 | orchestrator | Thursday 05 March 2026 00:52:33 +0000 (0:00:00.703) 0:03:32.441 ******** 2026-03-05 01:00:47.216002 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216006 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.216010 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.216013 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.216020 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.216024 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.216028 | orchestrator | 2026-03-05 01:00:47.216032 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-05 01:00:47.216036 | orchestrator | Thursday 05 March 2026 00:52:33 +0000 (0:00:00.646) 0:03:33.088 ******** 2026-03-05 01:00:47.216039 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216043 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.216047 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.216051 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.216054 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.216058 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.216062 | orchestrator | 2026-03-05 01:00:47.216066 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-05 01:00:47.216070 | orchestrator | Thursday 05 March 2026 00:52:34 +0000 (0:00:00.595) 0:03:33.683 ******** 2026-03-05 01:00:47.216074 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.216077 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.216081 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.216085 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.216089 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.216092 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.216096 | orchestrator | 2026-03-05 01:00:47.216100 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-05 01:00:47.216104 | orchestrator | Thursday 05 March 2026 00:52:35 +0000 (0:00:00.740) 0:03:34.424 ******** 2026-03-05 01:00:47.216108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 01:00:47.216112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 01:00:47.216116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 01:00:47.216119 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216123 | orchestrator | 2026-03-05 01:00:47.216127 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-05 01:00:47.216148 | orchestrator | Thursday 05 March 2026 00:52:35 +0000 (0:00:00.350) 0:03:34.774 ******** 2026-03-05 01:00:47.216152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 01:00:47.216156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 01:00:47.216160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 01:00:47.216164 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216168 | orchestrator | 2026-03-05 01:00:47.216172 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-05 01:00:47.216176 | orchestrator | Thursday 05 March 2026 00:52:35 +0000 (0:00:00.362) 0:03:35.136 ******** 2026-03-05 01:00:47.216180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 01:00:47.216183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 01:00:47.216187 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 01:00:47.216191 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216195 | orchestrator | 2026-03-05 01:00:47.216199 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-05 01:00:47.216203 | orchestrator | Thursday 05 March 2026 00:52:36 +0000 (0:00:00.432) 0:03:35.569 ******** 2026-03-05 01:00:47.216209 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.216213 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.216216 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.216220 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.216224 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.216228 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.216232 | orchestrator | 2026-03-05 01:00:47.216235 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-05 01:00:47.216239 | orchestrator | Thursday 05 March 2026 00:52:36 +0000 (0:00:00.762) 0:03:36.331 ******** 2026-03-05 01:00:47.216247 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-05 01:00:47.216251 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-05 01:00:47.216254 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-05 01:00:47.216258 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.216262 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-05 01:00:47.216266 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-03-05 01:00:47.216269 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.216273 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-05 01:00:47.216277 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.216281 | orchestrator | 2026-03-05 01:00:47.216284 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-05 01:00:47.216288 | orchestrator | Thursday 05 March 2026 00:52:39 +0000 (0:00:02.070) 0:03:38.401 ******** 2026-03-05 01:00:47.216292 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:00:47.216296 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:00:47.216300 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:00:47.216303 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:00:47.216307 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.216311 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.216315 | orchestrator | 2026-03-05 01:00:47.216318 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-05 01:00:47.216322 | orchestrator | Thursday 05 March 2026 00:52:41 +0000 (0:00:02.429) 0:03:40.831 ******** 2026-03-05 01:00:47.216326 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:00:47.216330 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:00:47.216333 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:00:47.216337 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:00:47.216341 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.216345 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.216349 | orchestrator | 2026-03-05 01:00:47.216352 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-05 01:00:47.216356 | orchestrator | Thursday 05 March 2026 00:52:42 +0000 (0:00:01.065) 0:03:41.897 ******** 2026-03-05 01:00:47.216360 | orchestrator | 
skipping: [testbed-node-3] 2026-03-05 01:00:47.216364 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.216367 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.216371 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:00:47.216375 | orchestrator | 2026-03-05 01:00:47.216379 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-05 01:00:47.216383 | orchestrator | Thursday 05 March 2026 00:52:43 +0000 (0:00:00.890) 0:03:42.787 ******** 2026-03-05 01:00:47.216386 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.216390 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.216394 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.216398 | orchestrator | 2026-03-05 01:00:47.216402 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-05 01:00:47.216405 | orchestrator | Thursday 05 March 2026 00:52:43 +0000 (0:00:00.290) 0:03:43.078 ******** 2026-03-05 01:00:47.216409 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:00:47.216413 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.216417 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.216421 | orchestrator | 2026-03-05 01:00:47.216425 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-05 01:00:47.216429 | orchestrator | Thursday 05 March 2026 00:52:45 +0000 (0:00:01.419) 0:03:44.497 ******** 2026-03-05 01:00:47.216433 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-05 01:00:47.216436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-05 01:00:47.216440 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-05 01:00:47.216444 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.216448 | orchestrator | 
2026-03-05 01:00:47.216455 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-05 01:00:47.216459 | orchestrator | Thursday 05 March 2026 00:52:45 +0000 (0:00:00.780) 0:03:45.278 ******** 2026-03-05 01:00:47.216463 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.216467 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.216470 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.216474 | orchestrator | 2026-03-05 01:00:47.216478 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-05 01:00:47.216485 | orchestrator | Thursday 05 March 2026 00:52:46 +0000 (0:00:00.335) 0:03:45.614 ******** 2026-03-05 01:00:47.216489 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.216493 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.216496 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.216500 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:00:47.216504 | orchestrator | 2026-03-05 01:00:47.216508 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-05 01:00:47.216512 | orchestrator | Thursday 05 March 2026 00:52:47 +0000 (0:00:00.827) 0:03:46.441 ******** 2026-03-05 01:00:47.216516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 01:00:47.216519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 01:00:47.216523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 01:00:47.216527 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216531 | orchestrator | 2026-03-05 01:00:47.216535 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-05 01:00:47.216539 | orchestrator | Thursday 05 March 2026 00:52:47 +0000 
(0:00:00.362) 0:03:46.804 ******** 2026-03-05 01:00:47.216545 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216549 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.216553 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.216556 | orchestrator | 2026-03-05 01:00:47.216560 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-05 01:00:47.216564 | orchestrator | Thursday 05 March 2026 00:52:47 +0000 (0:00:00.293) 0:03:47.098 ******** 2026-03-05 01:00:47.216568 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216572 | orchestrator | 2026-03-05 01:00:47.216576 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-05 01:00:47.216579 | orchestrator | Thursday 05 March 2026 00:52:47 +0000 (0:00:00.191) 0:03:47.290 ******** 2026-03-05 01:00:47.216583 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216587 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.216591 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.216595 | orchestrator | 2026-03-05 01:00:47.216599 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-05 01:00:47.216602 | orchestrator | Thursday 05 March 2026 00:52:48 +0000 (0:00:00.328) 0:03:47.619 ******** 2026-03-05 01:00:47.216606 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216610 | orchestrator | 2026-03-05 01:00:47.216614 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-05 01:00:47.216618 | orchestrator | Thursday 05 March 2026 00:52:48 +0000 (0:00:00.212) 0:03:47.831 ******** 2026-03-05 01:00:47.216622 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216625 | orchestrator | 2026-03-05 01:00:47.216629 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-05 
01:00:47.216633 | orchestrator | Thursday 05 March 2026 00:52:48 +0000 (0:00:00.202) 0:03:48.034 ******** 2026-03-05 01:00:47.216637 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216641 | orchestrator | 2026-03-05 01:00:47.216645 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-05 01:00:47.216648 | orchestrator | Thursday 05 March 2026 00:52:48 +0000 (0:00:00.105) 0:03:48.139 ******** 2026-03-05 01:00:47.216652 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216659 | orchestrator | 2026-03-05 01:00:47.216663 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-05 01:00:47.216667 | orchestrator | Thursday 05 March 2026 00:52:49 +0000 (0:00:00.537) 0:03:48.677 ******** 2026-03-05 01:00:47.216670 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216674 | orchestrator | 2026-03-05 01:00:47.216678 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-05 01:00:47.216682 | orchestrator | Thursday 05 March 2026 00:52:49 +0000 (0:00:00.199) 0:03:48.877 ******** 2026-03-05 01:00:47.216686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 01:00:47.216689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 01:00:47.216693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 01:00:47.216697 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216701 | orchestrator | 2026-03-05 01:00:47.216705 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-05 01:00:47.216708 | orchestrator | Thursday 05 March 2026 00:52:49 +0000 (0:00:00.386) 0:03:49.264 ******** 2026-03-05 01:00:47.216712 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.216716 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.216720 | 
orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.216724 | orchestrator |
2026-03-05 01:00:47.216727 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-05 01:00:47.216731 | orchestrator | Thursday 05 March 2026 00:52:50 +0000 (0:00:00.297) 0:03:49.561 ********
2026-03-05 01:00:47.216735 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.216740 | orchestrator |
2026-03-05 01:00:47.216744 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-05 01:00:47.216747 | orchestrator | Thursday 05 March 2026 00:52:50 +0000 (0:00:00.198) 0:03:49.760 ********
2026-03-05 01:00:47.216751 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.216755 | orchestrator |
2026-03-05 01:00:47.216759 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-05 01:00:47.216763 | orchestrator | Thursday 05 March 2026 00:52:50 +0000 (0:00:00.180) 0:03:49.941 ********
2026-03-05 01:00:47.216766 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.216770 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.216774 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.216778 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.216782 | orchestrator |
2026-03-05 01:00:47.216785 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-05 01:00:47.216789 | orchestrator | Thursday 05 March 2026 00:52:51 +0000 (0:00:00.962) 0:03:50.903 ********
2026-03-05 01:00:47.216793 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.216799 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.216803 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.216807 | orchestrator |
2026-03-05 01:00:47.216811 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-05 01:00:47.216815 | orchestrator | Thursday 05 March 2026 00:52:51 +0000 (0:00:00.302) 0:03:51.206 ********
2026-03-05 01:00:47.216818 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.216822 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.216826 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.216830 | orchestrator |
2026-03-05 01:00:47.216833 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-05 01:00:47.216837 | orchestrator | Thursday 05 March 2026 00:52:53 +0000 (0:00:01.307) 0:03:52.514 ********
2026-03-05 01:00:47.216841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 01:00:47.216845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 01:00:47.216849 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 01:00:47.216853 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.216856 | orchestrator |
2026-03-05 01:00:47.216863 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-05 01:00:47.216870 | orchestrator | Thursday 05 March 2026 00:52:53 +0000 (0:00:00.739) 0:03:53.253 ********
2026-03-05 01:00:47.216874 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.216878 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.216882 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.216885 | orchestrator |
2026-03-05 01:00:47.216889 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-05 01:00:47.216893 | orchestrator | Thursday 05 March 2026 00:52:54 +0000 (0:00:00.575) 0:03:53.829 ********
2026-03-05 01:00:47.216897 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.216901 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.216905 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.216909 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.216912 | orchestrator |
2026-03-05 01:00:47.216916 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-05 01:00:47.216920 | orchestrator | Thursday 05 March 2026 00:52:55 +0000 (0:00:00.837) 0:03:54.666 ********
2026-03-05 01:00:47.216924 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.216928 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.216932 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.216936 | orchestrator |
2026-03-05 01:00:47.216939 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-05 01:00:47.216943 | orchestrator | Thursday 05 March 2026 00:52:55 +0000 (0:00:00.552) 0:03:55.218 ********
2026-03-05 01:00:47.216948 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.216951 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.216955 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.216959 | orchestrator |
2026-03-05 01:00:47.216963 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-05 01:00:47.216967 | orchestrator | Thursday 05 March 2026 00:52:57 +0000 (0:00:01.443) 0:03:56.662 ********
2026-03-05 01:00:47.216970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 01:00:47.216974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 01:00:47.216978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 01:00:47.216982 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.216986 | orchestrator |
2026-03-05 01:00:47.216989 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-05 01:00:47.216993 | orchestrator | Thursday 05 March 2026 00:52:57 +0000 (0:00:00.667) 0:03:57.329 ********
2026-03-05 01:00:47.216997 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.217001 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.217005 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.217008 | orchestrator |
2026-03-05 01:00:47.217013 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-05 01:00:47.217016 | orchestrator | Thursday 05 March 2026 00:52:58 +0000 (0:00:00.424) 0:03:57.754 ********
2026-03-05 01:00:47.217020 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.217024 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.217028 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.217032 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217035 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.217039 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.217043 | orchestrator |
2026-03-05 01:00:47.217047 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-05 01:00:47.217051 | orchestrator | Thursday 05 March 2026 00:52:59 +0000 (0:00:00.892) 0:03:58.646 ********
2026-03-05 01:00:47.217054 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.217058 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.217062 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.217066 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.217074 | orchestrator |
2026-03-05 01:00:47.217078 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-05 01:00:47.217082 | orchestrator | Thursday 05 March 2026 00:53:00 +0000 (0:00:00.898) 0:03:59.544 ********
2026-03-05 01:00:47.217085 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217089 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217093 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217097 | orchestrator |
2026-03-05 01:00:47.217101 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-05 01:00:47.217104 | orchestrator | Thursday 05 March 2026 00:53:00 +0000 (0:00:00.611) 0:04:00.156 ********
2026-03-05 01:00:47.217108 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.217112 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.217116 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.217119 | orchestrator |
2026-03-05 01:00:47.217123 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-05 01:00:47.217127 | orchestrator | Thursday 05 March 2026 00:53:02 +0000 (0:00:01.462) 0:04:01.618 ********
2026-03-05 01:00:47.217149 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-05 01:00:47.217153 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-05 01:00:47.217157 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-05 01:00:47.217161 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217165 | orchestrator |
2026-03-05 01:00:47.217169 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-05 01:00:47.217173 | orchestrator | Thursday 05 March 2026 00:53:02 +0000 (0:00:00.658) 0:04:02.276 ********
2026-03-05 01:00:47.217176 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217180 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217184 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217188 | orchestrator |
2026-03-05 01:00:47.217191 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-03-05 01:00:47.217195 | orchestrator |
2026-03-05 01:00:47.217199 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-05 01:00:47.217203 | orchestrator | Thursday 05 March 2026 00:53:03 +0000 (0:00:00.957) 0:04:03.234 ********
2026-03-05 01:00:47.217207 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.217211 | orchestrator |
2026-03-05 01:00:47.217217 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-05 01:00:47.217221 | orchestrator | Thursday 05 March 2026 00:53:04 +0000 (0:00:00.501) 0:04:03.735 ********
2026-03-05 01:00:47.217225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.217230 | orchestrator |
2026-03-05 01:00:47.217233 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-05 01:00:47.217237 | orchestrator | Thursday 05 March 2026 00:53:04 +0000 (0:00:00.522) 0:04:04.257 ********
2026-03-05 01:00:47.217241 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217245 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217249 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217252 | orchestrator |
2026-03-05 01:00:47.217256 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-05 01:00:47.217260 | orchestrator | Thursday 05 March 2026 00:53:06 +0000 (0:00:01.093) 0:04:05.351 ********
2026-03-05 01:00:47.217264 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217268 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.217271 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.217275 | orchestrator |
2026-03-05 01:00:47.217279 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-05 01:00:47.217283 | orchestrator | Thursday 05 March 2026 00:53:06 +0000 (0:00:00.319) 0:04:05.671 ********
2026-03-05 01:00:47.217290 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217294 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.217298 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.217301 | orchestrator |
2026-03-05 01:00:47.217305 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-05 01:00:47.217309 | orchestrator | Thursday 05 March 2026 00:53:06 +0000 (0:00:00.318) 0:04:06.003 ********
2026-03-05 01:00:47.217313 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217317 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.217321 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.217324 | orchestrator |
2026-03-05 01:00:47.217328 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-05 01:00:47.217332 | orchestrator | Thursday 05 March 2026 00:53:06 +0000 (0:00:00.318) 0:04:06.322 ********
2026-03-05 01:00:47.217336 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217340 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217344 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217347 | orchestrator |
2026-03-05 01:00:47.217351 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-05 01:00:47.217355 | orchestrator | Thursday 05 March 2026 00:53:08 +0000 (0:00:01.115) 0:04:07.438 ********
2026-03-05 01:00:47.217359 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217362 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.217366 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.217370 | orchestrator |
2026-03-05 01:00:47.217374 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-05 01:00:47.217378 | orchestrator | Thursday 05 March 2026 00:53:08 +0000 (0:00:00.344) 0:04:07.783 ********
2026-03-05 01:00:47.217381 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217385 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.217389 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.217393 | orchestrator |
2026-03-05 01:00:47.217396 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-05 01:00:47.217400 | orchestrator | Thursday 05 March 2026 00:53:08 +0000 (0:00:00.337) 0:04:08.120 ********
2026-03-05 01:00:47.217404 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217408 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217412 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217415 | orchestrator |
2026-03-05 01:00:47.217419 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-05 01:00:47.217423 | orchestrator | Thursday 05 March 2026 00:53:09 +0000 (0:00:00.756) 0:04:08.877 ********
2026-03-05 01:00:47.217427 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217431 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217435 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217438 | orchestrator |
2026-03-05 01:00:47.217442 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-05 01:00:47.217446 | orchestrator | Thursday 05 March 2026 00:53:10 +0000 (0:00:01.076) 0:04:09.953 ********
2026-03-05 01:00:47.217450 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217454 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.217457 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.217461 | orchestrator |
2026-03-05 01:00:47.217465 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-05 01:00:47.217469 | orchestrator | Thursday 05 March 2026 00:53:10 +0000 (0:00:00.326) 0:04:10.280 ********
2026-03-05 01:00:47.217473 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217476 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217480 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217484 | orchestrator |
2026-03-05 01:00:47.217488 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-05 01:00:47.217492 | orchestrator | Thursday 05 March 2026 00:53:11 +0000 (0:00:00.314) 0:04:10.594 ********
2026-03-05 01:00:47.217495 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217499 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.217508 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.217512 | orchestrator |
2026-03-05 01:00:47.217516 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-05 01:00:47.217526 | orchestrator | Thursday 05 March 2026 00:53:11 +0000 (0:00:00.266) 0:04:10.861 ********
2026-03-05 01:00:47.217530 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217534 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.217538 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.217542 | orchestrator |
2026-03-05 01:00:47.217546 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-05 01:00:47.217556 | orchestrator | Thursday 05 March 2026 00:53:11 +0000 (0:00:00.479) 0:04:11.340 ********
2026-03-05 01:00:47.217560 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217563 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.217567 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.217571 | orchestrator |
2026-03-05 01:00:47.217578 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-05 01:00:47.217582 | orchestrator | Thursday 05 March 2026 00:53:12 +0000 (0:00:00.259) 0:04:11.600 ********
2026-03-05 01:00:47.217586 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.217590 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.217593 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217597 | orchestrator |
2026-03-05 01:00:47.217601 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-05 01:00:47.217605 | orchestrator | Thursday 05 March 2026 00:53:12 +0000 (0:00:00.340) 0:04:11.940 ********
2026-03-05 01:00:47.217608 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217612 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.217616 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.217620 | orchestrator |
2026-03-05 01:00:47.217624 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-05 01:00:47.217627 | orchestrator | Thursday 05 March 2026 00:53:12 +0000 (0:00:00.324) 0:04:12.265 ********
2026-03-05 01:00:47.217631 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217635 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217639 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217643 | orchestrator |
2026-03-05 01:00:47.217646 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-05 01:00:47.217650 | orchestrator | Thursday 05 March 2026 00:53:13 +0000 (0:00:00.298) 0:04:12.564 ********
2026-03-05 01:00:47.217654 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217658 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217662 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217665 | orchestrator |
2026-03-05 01:00:47.217669 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-05 01:00:47.217673 | orchestrator | Thursday 05 March 2026 00:53:13 +0000 (0:00:00.464) 0:04:13.028 ********
2026-03-05 01:00:47.217677 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217680 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217684 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217688 | orchestrator |
2026-03-05 01:00:47.217692 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-05 01:00:47.217695 | orchestrator | Thursday 05 March 2026 00:53:14 +0000 (0:00:00.468) 0:04:13.497 ********
2026-03-05 01:00:47.217699 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217703 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217707 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217710 | orchestrator |
2026-03-05 01:00:47.217714 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-05 01:00:47.217718 | orchestrator | Thursday 05 March 2026 00:53:14 +0000 (0:00:00.307) 0:04:13.804 ********
2026-03-05 01:00:47.217741 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.217745 | orchestrator |
2026-03-05 01:00:47.217749 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-05 01:00:47.217756 | orchestrator | Thursday 05 March 2026 00:53:15 +0000 (0:00:00.739) 0:04:14.544 ********
2026-03-05 01:00:47.217760 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.217764 | orchestrator |
2026-03-05 01:00:47.217768 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-05 01:00:47.217772 | orchestrator | Thursday 05 March 2026 00:53:15 +0000 (0:00:00.150) 0:04:14.695 ********
2026-03-05 01:00:47.217775 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-05 01:00:47.217779 | orchestrator |
2026-03-05 01:00:47.217783 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-05 01:00:47.217787 | orchestrator | Thursday 05 March 2026 00:53:16 +0000 (0:00:00.888) 0:04:15.584 ********
2026-03-05 01:00:47.217791 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217794 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217798 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217802 | orchestrator |
2026-03-05 01:00:47.217806 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-05 01:00:47.217810 | orchestrator | Thursday 05 March 2026 00:53:16 +0000 (0:00:00.313) 0:04:15.897 ********
2026-03-05 01:00:47.217814 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217817 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217821 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217825 | orchestrator |
2026-03-05 01:00:47.217829 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-05 01:00:47.217832 | orchestrator | Thursday 05 March 2026 00:53:17 +0000 (0:00:00.455) 0:04:16.353 ********
2026-03-05 01:00:47.217836 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.217840 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.217844 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.217847 | orchestrator |
2026-03-05 01:00:47.217851 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-05 01:00:47.217855 | orchestrator | Thursday 05 March 2026 00:53:18 +0000 (0:00:01.160) 0:04:17.514 ********
2026-03-05 01:00:47.217859 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.217865 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.217869 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.217873 | orchestrator |
2026-03-05 01:00:47.217877 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-05 01:00:47.217880 | orchestrator | Thursday 05 March 2026 00:53:18 +0000 (0:00:00.810) 0:04:18.324 ********
2026-03-05 01:00:47.217884 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.217888 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.217892 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.217896 | orchestrator |
2026-03-05 01:00:47.217899 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-05 01:00:47.217903 | orchestrator | Thursday 05 March 2026 00:53:19 +0000 (0:00:00.699) 0:04:19.024 ********
2026-03-05 01:00:47.217907 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217911 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.217915 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.217918 | orchestrator |
2026-03-05 01:00:47.217922 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-05 01:00:47.217926 | orchestrator | Thursday 05 March 2026 00:53:20 +0000 (0:00:00.726) 0:04:19.751 ********
2026-03-05 01:00:47.217930 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.217934 | orchestrator |
2026-03-05 01:00:47.217940 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-05 01:00:47.217944 | orchestrator | Thursday 05 March 2026 00:53:22 +0000 (0:00:01.615) 0:04:21.367 ********
2026-03-05 01:00:47.217948 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.217952 | orchestrator |
2026-03-05 01:00:47.217956 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-05 01:00:47.217960 | orchestrator | Thursday 05 March 2026 00:53:22 +0000 (0:00:00.690) 0:04:22.058 ********
2026-03-05 01:00:47.217967 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-05 01:00:47.217972 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-05 01:00:47.217979 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-05 01:00:47.217985 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-05 01:00:47.217991 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-05 01:00:47.217999 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-05 01:00:47.218005 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-05 01:00:47.218011 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-03-05 01:00:47.218043 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-05 01:00:47.218049 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-05 01:00:47.218055 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-05 01:00:47.218062 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-05 01:00:47.218068 | orchestrator |
2026-03-05 01:00:47.218074 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-05 01:00:47.218081 | orchestrator | Thursday 05 March 2026 00:53:26 +0000 (0:00:04.181) 0:04:26.239 ********
2026-03-05 01:00:47.218089 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.218093 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.218096 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.218100 | orchestrator |
2026-03-05 01:00:47.218104 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-05 01:00:47.218108 | orchestrator | Thursday 05 March 2026 00:53:28 +0000 (0:00:01.598) 0:04:27.838 ********
2026-03-05 01:00:47.218112 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.218116 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.218119 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.218123 | orchestrator |
2026-03-05 01:00:47.218127 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-05 01:00:47.218131 | orchestrator | Thursday 05 March 2026 00:53:28 +0000 (0:00:00.354) 0:04:28.193 ********
2026-03-05 01:00:47.218147 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.218151 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.218155 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.218159 | orchestrator |
2026-03-05 01:00:47.218163 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-05 01:00:47.218167 | orchestrator | Thursday 05 March 2026 00:53:29 +0000 (0:00:00.685) 0:04:28.878 ********
2026-03-05 01:00:47.218170 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.218174 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.218178 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.218182 | orchestrator |
2026-03-05 01:00:47.218186 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-05 01:00:47.218190 | orchestrator | Thursday 05 March 2026 00:53:31 +0000 (0:00:01.642) 0:04:30.521 ********
2026-03-05 01:00:47.218194 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.218198 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.218201 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.218205 | orchestrator |
2026-03-05 01:00:47.218209 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-05 01:00:47.218213 | orchestrator | Thursday 05 March 2026 00:53:32 +0000 (0:00:01.229) 0:04:31.751 ********
2026-03-05 01:00:47.218217 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.218220 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.218224 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.218228 | orchestrator |
2026-03-05 01:00:47.218232 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-05 01:00:47.218236 | orchestrator | Thursday 05 March 2026 00:53:32 +0000 (0:00:00.362) 0:04:32.114 ********
2026-03-05 01:00:47.218239 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.218249 | orchestrator |
2026-03-05 01:00:47.218253 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-05 01:00:47.218257 | orchestrator | Thursday 05 March 2026 00:53:33 +0000 (0:00:00.660) 0:04:32.774 ********
2026-03-05 01:00:47.218261 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.218265 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.218271 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.218275 | orchestrator |
2026-03-05 01:00:47.218279 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-05 01:00:47.218283 | orchestrator | Thursday 05 March 2026 00:53:33 +0000 (0:00:00.331) 0:04:33.106 ********
2026-03-05 01:00:47.218287 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.218290 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.218294 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.218298 | orchestrator |
2026-03-05 01:00:47.218302 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-05 01:00:47.218306 | orchestrator | Thursday 05 March 2026 00:53:34 +0000 (0:00:00.375) 0:04:33.482 ********
2026-03-05 01:00:47.218310 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.218313 | orchestrator |
2026-03-05 01:00:47.218317 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-05 01:00:47.218321 | orchestrator | Thursday 05 March 2026 00:53:34 +0000 (0:00:00.631) 0:04:34.113 ********
2026-03-05 01:00:47.218325 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.218329 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.218343 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.218347 | orchestrator |
2026-03-05 01:00:47.218351 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-05 01:00:47.218355 | orchestrator | Thursday 05 March 2026 00:53:36 +0000 (0:00:01.491) 0:04:35.605 ********
2026-03-05 01:00:47.218358 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.218362 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.218366 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.218370 | orchestrator |
2026-03-05 01:00:47.218374 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-05 01:00:47.218377 | orchestrator | Thursday 05 March 2026 00:53:37 +0000 (0:00:01.306) 0:04:36.912 ********
2026-03-05 01:00:47.218381 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.218385 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.218389 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.218392 | orchestrator |
2026-03-05 01:00:47.218396 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-05 01:00:47.218400 | orchestrator | Thursday 05 March 2026 00:53:39 +0000 (0:00:01.947) 0:04:38.859 ********
2026-03-05 01:00:47.218404 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.218408 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.218412 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.218415 | orchestrator |
2026-03-05 01:00:47.218419 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-05 01:00:47.218423 | orchestrator | Thursday 05 March 2026 00:53:41 +0000 (0:00:02.468) 0:04:41.328 ********
2026-03-05 01:00:47.218427 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.218431 | orchestrator |
2026-03-05 01:00:47.218435 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-05 01:00:47.218438 | orchestrator | Thursday 05 March 2026 00:53:42 +0000 (0:00:00.526) 0:04:41.854 ********
2026-03-05 01:00:47.218442 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-05 01:00:47.218446 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.218450 | orchestrator |
2026-03-05 01:00:47.218454 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-05 01:00:47.218461 | orchestrator | Thursday 05 March 2026 00:54:04 +0000 (0:00:21.991) 0:05:03.846 ********
2026-03-05 01:00:47.218465 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.218468 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.218472 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.218476 | orchestrator |
2026-03-05 01:00:47.218480 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-05 01:00:47.218484 | orchestrator | Thursday 05 March 2026 00:54:13 +0000 (0:00:09.181) 0:05:13.027 ********
2026-03-05 01:00:47.218488 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.218491 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.218495 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.218499 | orchestrator |
2026-03-05 01:00:47.218503 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-05 01:00:47.218507 | orchestrator | Thursday 05 March 2026 00:54:14 +0000 (0:00:00.584) 0:05:13.611 ********
2026-03-05 01:00:47.218512 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8016b9c679d0adf03dbfd560c527052319425b7d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-05 01:00:47.218517 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8016b9c679d0adf03dbfd560c527052319425b7d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-05 01:00:47.218526 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8016b9c679d0adf03dbfd560c527052319425b7d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-05 01:00:47.218531 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8016b9c679d0adf03dbfd560c527052319425b7d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-05 01:00:47.218536 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8016b9c679d0adf03dbfd560c527052319425b7d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-05 01:00:47.218543 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8016b9c679d0adf03dbfd560c527052319425b7d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__8016b9c679d0adf03dbfd560c527052319425b7d'}])
2026-03-05 01:00:47.218549 | orchestrator |
2026-03-05 01:00:47.218553 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-05 01:00:47.218557 | orchestrator | Thursday 05 March 2026 00:54:29 +0000 (0:00:15.518) 0:05:29.130 ********
2026-03-05 01:00:47.218561 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.218565 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.218568 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.218572 | orchestrator |
2026-03-05 01:00:47.218576 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-05 01:00:47.218583 | orchestrator | Thursday 05 March 2026 00:54:30 +0000 (0:00:00.351) 0:05:29.482 ********
2026-03-05 01:00:47.218587 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.218591 | orchestrator |
2026-03-05 01:00:47.218595 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-05 01:00:47.218599 | orchestrator | Thursday 05 March 2026 00:54:30 +0000 (0:00:00.852) 0:05:30.334 ********
2026-03-05 01:00:47.218603 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.218607 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.218610 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.218614 | orchestrator |
2026-03-05 01:00:47.218618 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-05 01:00:47.218622 | orchestrator | Thursday 05 March 2026 00:54:31 +0000 (0:00:00.417) 0:05:30.752 ********
2026-03-05 01:00:47.218626 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.218629 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.218633 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.218637 | orchestrator |
2026-03-05 01:00:47.218641 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-05 01:00:47.218645 | orchestrator | Thursday 05 March 2026 00:54:31 +0000 (0:00:00.336) 0:05:31.089 ********
2026-03-05 01:00:47.218649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-05 01:00:47.218652 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-05 01:00:47.218656 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-05 01:00:47.218660 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.218664 | orchestrator |
2026-03-05 01:00:47.218668 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-05 01:00:47.218671 | orchestrator | Thursday 05 March 2026 00:54:32 +0000 (0:00:01.199) 0:05:32.288 ********
2026-03-05 01:00:47.218675 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.218679 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.218683 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.218687 | orchestrator |
2026-03-05 01:00:47.218691 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-05 01:00:47.218694 | orchestrator |
2026-03-05 01:00:47.218698 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml]
************************ 2026-03-05 01:00:47.218702 | orchestrator | Thursday 05 March 2026 00:54:33 +0000 (0:00:00.645) 0:05:32.933 ******** 2026-03-05 01:00:47.218706 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:00:47.218710 | orchestrator | 2026-03-05 01:00:47.218714 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-05 01:00:47.218717 | orchestrator | Thursday 05 March 2026 00:54:34 +0000 (0:00:00.491) 0:05:33.425 ******** 2026-03-05 01:00:47.218721 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:00:47.218725 | orchestrator | 2026-03-05 01:00:47.218729 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-05 01:00:47.218733 | orchestrator | Thursday 05 March 2026 00:54:34 +0000 (0:00:00.829) 0:05:34.254 ******** 2026-03-05 01:00:47.218736 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.218740 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.218744 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.218748 | orchestrator | 2026-03-05 01:00:47.218752 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-05 01:00:47.218756 | orchestrator | Thursday 05 March 2026 00:54:35 +0000 (0:00:00.847) 0:05:35.102 ******** 2026-03-05 01:00:47.218762 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.218766 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.218770 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.218776 | orchestrator | 2026-03-05 01:00:47.218780 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-05 01:00:47.218784 | orchestrator | Thursday 05 March 2026 00:54:36 +0000 
(0:00:00.283) 0:05:35.386 ******** 2026-03-05 01:00:47.218788 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.218792 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.218795 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.218799 | orchestrator | 2026-03-05 01:00:47.218803 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-05 01:00:47.218807 | orchestrator | Thursday 05 March 2026 00:54:36 +0000 (0:00:00.575) 0:05:35.961 ******** 2026-03-05 01:00:47.218811 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.218814 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.218818 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.218822 | orchestrator | 2026-03-05 01:00:47.218826 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-05 01:00:47.218830 | orchestrator | Thursday 05 March 2026 00:54:36 +0000 (0:00:00.302) 0:05:36.264 ******** 2026-03-05 01:00:47.218834 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.218840 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.218844 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.218848 | orchestrator | 2026-03-05 01:00:47.218851 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-05 01:00:47.218855 | orchestrator | Thursday 05 March 2026 00:54:37 +0000 (0:00:00.832) 0:05:37.096 ******** 2026-03-05 01:00:47.218859 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.218863 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.218867 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.218870 | orchestrator | 2026-03-05 01:00:47.218874 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-05 01:00:47.218878 | orchestrator | Thursday 05 March 2026 00:54:38 +0000 (0:00:00.401) 
0:05:37.498 ******** 2026-03-05 01:00:47.218882 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.218886 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.218890 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.218894 | orchestrator | 2026-03-05 01:00:47.218898 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-05 01:00:47.218901 | orchestrator | Thursday 05 March 2026 00:54:38 +0000 (0:00:00.589) 0:05:38.088 ******** 2026-03-05 01:00:47.218905 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.218909 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.218913 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.218917 | orchestrator | 2026-03-05 01:00:47.218920 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-05 01:00:47.218924 | orchestrator | Thursday 05 March 2026 00:54:39 +0000 (0:00:00.746) 0:05:38.835 ******** 2026-03-05 01:00:47.218928 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.218932 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.218936 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.218940 | orchestrator | 2026-03-05 01:00:47.218943 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-05 01:00:47.218947 | orchestrator | Thursday 05 March 2026 00:54:40 +0000 (0:00:00.823) 0:05:39.659 ******** 2026-03-05 01:00:47.218951 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.218955 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.218959 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.218962 | orchestrator | 2026-03-05 01:00:47.218966 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-05 01:00:47.218970 | orchestrator | Thursday 05 March 2026 00:54:40 +0000 (0:00:00.356) 0:05:40.016 ******** 2026-03-05 
01:00:47.218974 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.218978 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.218982 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.218986 | orchestrator | 2026-03-05 01:00:47.218989 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-05 01:00:47.218997 | orchestrator | Thursday 05 March 2026 00:54:41 +0000 (0:00:00.632) 0:05:40.648 ******** 2026-03-05 01:00:47.219000 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.219004 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.219008 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.219012 | orchestrator | 2026-03-05 01:00:47.219016 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-05 01:00:47.219019 | orchestrator | Thursday 05 March 2026 00:54:41 +0000 (0:00:00.322) 0:05:40.971 ******** 2026-03-05 01:00:47.219023 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.219027 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.219031 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.219035 | orchestrator | 2026-03-05 01:00:47.219039 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-05 01:00:47.219042 | orchestrator | Thursday 05 March 2026 00:54:41 +0000 (0:00:00.314) 0:05:41.286 ******** 2026-03-05 01:00:47.219046 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.219050 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.219054 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.219058 | orchestrator | 2026-03-05 01:00:47.219061 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-05 01:00:47.219065 | orchestrator | Thursday 05 March 2026 00:54:42 +0000 (0:00:00.313) 0:05:41.599 ******** 2026-03-05 01:00:47.219069 | 
orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.219073 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.219077 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.219080 | orchestrator | 2026-03-05 01:00:47.219084 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-05 01:00:47.219088 | orchestrator | Thursday 05 March 2026 00:54:42 +0000 (0:00:00.320) 0:05:41.919 ******** 2026-03-05 01:00:47.219092 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.219096 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.219099 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.219103 | orchestrator | 2026-03-05 01:00:47.219107 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-05 01:00:47.219111 | orchestrator | Thursday 05 March 2026 00:54:43 +0000 (0:00:00.602) 0:05:42.522 ******** 2026-03-05 01:00:47.219115 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.219121 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.219125 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.219129 | orchestrator | 2026-03-05 01:00:47.219161 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-05 01:00:47.219166 | orchestrator | Thursday 05 March 2026 00:54:43 +0000 (0:00:00.391) 0:05:42.913 ******** 2026-03-05 01:00:47.219170 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.219174 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.219178 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.219182 | orchestrator | 2026-03-05 01:00:47.219186 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-05 01:00:47.219190 | orchestrator | Thursday 05 March 2026 00:54:43 +0000 (0:00:00.362) 0:05:43.276 ******** 2026-03-05 01:00:47.219194 | orchestrator | ok: [testbed-node-0] 
2026-03-05 01:00:47.219198 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.219202 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.219206 | orchestrator | 2026-03-05 01:00:47.219210 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-05 01:00:47.219215 | orchestrator | Thursday 05 March 2026 00:54:44 +0000 (0:00:00.776) 0:05:44.053 ******** 2026-03-05 01:00:47.219219 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-05 01:00:47.219226 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-05 01:00:47.219230 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-05 01:00:47.219234 | orchestrator | 2026-03-05 01:00:47.219239 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-05 01:00:47.219246 | orchestrator | Thursday 05 March 2026 00:54:45 +0000 (0:00:00.678) 0:05:44.731 ******** 2026-03-05 01:00:47.219251 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:00:47.219255 | orchestrator | 2026-03-05 01:00:47.219259 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-05 01:00:47.219263 | orchestrator | Thursday 05 March 2026 00:54:45 +0000 (0:00:00.581) 0:05:45.313 ******** 2026-03-05 01:00:47.219267 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:00:47.219271 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.219275 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.219279 | orchestrator | 2026-03-05 01:00:47.219283 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-05 01:00:47.219287 | orchestrator | Thursday 05 March 2026 00:54:46 +0000 (0:00:00.680) 0:05:45.994 ******** 2026-03-05 01:00:47.219291 | 
orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.219296 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.219300 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.219304 | orchestrator | 2026-03-05 01:00:47.219308 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-05 01:00:47.219312 | orchestrator | Thursday 05 March 2026 00:54:47 +0000 (0:00:00.524) 0:05:46.518 ******** 2026-03-05 01:00:47.219316 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 01:00:47.219320 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 01:00:47.219324 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 01:00:47.219328 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-05 01:00:47.219332 | orchestrator | 2026-03-05 01:00:47.219336 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-05 01:00:47.219340 | orchestrator | Thursday 05 March 2026 00:54:58 +0000 (0:00:10.896) 0:05:57.415 ******** 2026-03-05 01:00:47.219344 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.219348 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.219352 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.219356 | orchestrator | 2026-03-05 01:00:47.219360 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-05 01:00:47.219364 | orchestrator | Thursday 05 March 2026 00:54:58 +0000 (0:00:00.364) 0:05:57.779 ******** 2026-03-05 01:00:47.219368 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-05 01:00:47.219372 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-05 01:00:47.219376 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-05 01:00:47.219380 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-05 01:00:47.219384 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:47.219389 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:00:47.219393 | orchestrator | 2026-03-05 01:00:47.219397 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-05 01:00:47.219401 | orchestrator | Thursday 05 March 2026 00:55:00 +0000 (0:00:02.243) 0:06:00.023 ******** 2026-03-05 01:00:47.219405 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-05 01:00:47.219409 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-05 01:00:47.219413 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-05 01:00:47.219417 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 01:00:47.219421 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-05 01:00:47.219425 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-05 01:00:47.219429 | orchestrator | 2026-03-05 01:00:47.219433 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-05 01:00:47.219437 | orchestrator | Thursday 05 March 2026 00:55:02 +0000 (0:00:01.329) 0:06:01.353 ******** 2026-03-05 01:00:47.219441 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.219451 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.219455 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.219459 | orchestrator | 2026-03-05 01:00:47.219464 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-05 01:00:47.219468 | orchestrator | Thursday 05 March 2026 00:55:03 +0000 (0:00:01.038) 0:06:02.392 ******** 2026-03-05 01:00:47.219472 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.219476 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.219480 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.219484 | 
orchestrator | 2026-03-05 01:00:47.219488 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-05 01:00:47.219507 | orchestrator | Thursday 05 March 2026 00:55:03 +0000 (0:00:00.356) 0:06:02.749 ******** 2026-03-05 01:00:47.219512 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.219516 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.219520 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.219524 | orchestrator | 2026-03-05 01:00:47.219528 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-05 01:00:47.219532 | orchestrator | Thursday 05 March 2026 00:55:03 +0000 (0:00:00.309) 0:06:03.059 ******** 2026-03-05 01:00:47.219536 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:00:47.219540 | orchestrator | 2026-03-05 01:00:47.219544 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-05 01:00:47.219548 | orchestrator | Thursday 05 March 2026 00:55:04 +0000 (0:00:00.787) 0:06:03.846 ******** 2026-03-05 01:00:47.219552 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.219557 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.219561 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.219565 | orchestrator | 2026-03-05 01:00:47.219569 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-05 01:00:47.219576 | orchestrator | Thursday 05 March 2026 00:55:04 +0000 (0:00:00.359) 0:06:04.205 ******** 2026-03-05 01:00:47.219580 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.219584 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.219588 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.219592 | orchestrator | 2026-03-05 01:00:47.219596 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-05 01:00:47.219600 | orchestrator | Thursday 05 March 2026 00:55:05 +0000 (0:00:00.358) 0:06:04.563 ******** 2026-03-05 01:00:47.219604 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:00:47.219608 | orchestrator | 2026-03-05 01:00:47.219612 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-05 01:00:47.219617 | orchestrator | Thursday 05 March 2026 00:55:06 +0000 (0:00:00.846) 0:06:05.410 ******** 2026-03-05 01:00:47.219621 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.219625 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:00:47.219629 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.219633 | orchestrator | 2026-03-05 01:00:47.219637 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-05 01:00:47.219641 | orchestrator | Thursday 05 March 2026 00:55:07 +0000 (0:00:01.425) 0:06:06.835 ******** 2026-03-05 01:00:47.219645 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:00:47.219649 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.219653 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.219657 | orchestrator | 2026-03-05 01:00:47.219661 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-05 01:00:47.219665 | orchestrator | Thursday 05 March 2026 00:55:08 +0000 (0:00:01.237) 0:06:08.073 ******** 2026-03-05 01:00:47.219669 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:00:47.219673 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.219677 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.219681 | orchestrator | 2026-03-05 01:00:47.219690 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-03-05 01:00:47.219695 | orchestrator | Thursday 05 March 2026 00:55:10 +0000 (0:00:01.894) 0:06:09.967 ******** 2026-03-05 01:00:47.219699 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:00:47.219703 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.219707 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.219711 | orchestrator | 2026-03-05 01:00:47.219715 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-05 01:00:47.219719 | orchestrator | Thursday 05 March 2026 00:55:12 +0000 (0:00:02.233) 0:06:12.200 ******** 2026-03-05 01:00:47.219723 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.219727 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.219731 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-05 01:00:47.219735 | orchestrator | 2026-03-05 01:00:47.219739 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-05 01:00:47.219743 | orchestrator | Thursday 05 March 2026 00:55:13 +0000 (0:00:00.415) 0:06:12.616 ******** 2026-03-05 01:00:47.219747 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-05 01:00:47.219751 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-05 01:00:47.219755 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-05 01:00:47.219760 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-05 01:00:47.219764 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-03-05 01:00:47.219768 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 2026-03-05 01:00:47.219772 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-05 01:00:47.219776 | orchestrator | 2026-03-05 01:00:47.219780 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-05 01:00:47.219784 | orchestrator | Thursday 05 March 2026 00:55:49 +0000 (0:00:36.455) 0:06:49.072 ******** 2026-03-05 01:00:47.219788 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-05 01:00:47.219792 | orchestrator | 2026-03-05 01:00:47.219796 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-05 01:00:47.219800 | orchestrator | Thursday 05 March 2026 00:55:51 +0000 (0:00:01.310) 0:06:50.383 ******** 2026-03-05 01:00:47.219804 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.219808 | orchestrator | 2026-03-05 01:00:47.219815 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-05 01:00:47.219819 | orchestrator | Thursday 05 March 2026 00:55:51 +0000 (0:00:00.307) 0:06:50.690 ******** 2026-03-05 01:00:47.219823 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.219827 | orchestrator | 2026-03-05 01:00:47.219831 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-05 01:00:47.219835 | orchestrator | Thursday 05 March 2026 00:55:51 +0000 (0:00:00.157) 0:06:50.848 ******** 2026-03-05 01:00:47.219839 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-05 01:00:47.219843 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-05 01:00:47.219847 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-05 
01:00:47.219851 | orchestrator | 2026-03-05 01:00:47.219855 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-03-05 01:00:47.219859 | orchestrator | Thursday 05 March 2026 00:55:58 +0000 (0:00:06.696) 0:06:57.545 ******** 2026-03-05 01:00:47.219863 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-05 01:00:47.219870 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-05 01:00:47.219878 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-05 01:00:47.219882 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-05 01:00:47.219886 | orchestrator | 2026-03-05 01:00:47.219890 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-05 01:00:47.219894 | orchestrator | Thursday 05 March 2026 00:56:03 +0000 (0:00:05.175) 0:07:02.720 ******** 2026-03-05 01:00:47.219898 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:00:47.219902 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.219906 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.219910 | orchestrator | 2026-03-05 01:00:47.219914 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-05 01:00:47.219918 | orchestrator | Thursday 05 March 2026 00:56:04 +0000 (0:00:00.650) 0:07:03.370 ******** 2026-03-05 01:00:47.219922 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:00:47.219926 | orchestrator | 2026-03-05 01:00:47.219930 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-05 01:00:47.219934 | orchestrator | Thursday 05 March 2026 00:56:04 +0000 (0:00:00.782) 0:07:04.152 ******** 2026-03-05 01:00:47.219938 | orchestrator | ok: [testbed-node-0] 
2026-03-05 01:00:47.219942 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.219946 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.219950 | orchestrator |
2026-03-05 01:00:47.219954 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-05 01:00:47.219958 | orchestrator | Thursday 05 March 2026 00:56:05 +0000 (0:00:00.332) 0:07:04.485 ********
2026-03-05 01:00:47.219963 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:00:47.219967 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:00:47.219971 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:00:47.219975 | orchestrator |
2026-03-05 01:00:47.219979 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-05 01:00:47.219983 | orchestrator | Thursday 05 March 2026 00:56:06 +0000 (0:00:01.127) 0:07:05.613 ********
2026-03-05 01:00:47.219987 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-05 01:00:47.219991 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-05 01:00:47.219995 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-05 01:00:47.219999 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.220003 | orchestrator |
2026-03-05 01:00:47.220007 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-05 01:00:47.220011 | orchestrator | Thursday 05 March 2026 00:56:07 +0000 (0:00:00.936) 0:07:06.549 ********
2026-03-05 01:00:47.220016 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.220020 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.220024 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.220028 | orchestrator |
2026-03-05 01:00:47.220032 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-05 01:00:47.220036 | orchestrator |
2026-03-05 01:00:47.220040 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-05 01:00:47.220044 | orchestrator | Thursday 05 March 2026 00:56:07 +0000 (0:00:00.787) 0:07:07.337 ********
2026-03-05 01:00:47.220048 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.220052 | orchestrator |
2026-03-05 01:00:47.220056 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-05 01:00:47.220060 | orchestrator | Thursday 05 March 2026 00:56:08 +0000 (0:00:00.533) 0:07:07.870 ********
2026-03-05 01:00:47.220064 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.220068 | orchestrator |
2026-03-05 01:00:47.220072 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-05 01:00:47.220079 | orchestrator | Thursday 05 March 2026 00:56:09 +0000 (0:00:00.834) 0:07:08.705 ********
2026-03-05 01:00:47.220083 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.220087 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.220091 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.220095 | orchestrator |
2026-03-05 01:00:47.220099 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-05 01:00:47.220103 | orchestrator | Thursday 05 March 2026 00:56:09 +0000 (0:00:00.324) 0:07:09.029 ********
2026-03-05 01:00:47.220107 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.220111 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.220115 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.220119 | orchestrator |
2026-03-05 01:00:47.220123 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-05 01:00:47.220131 | orchestrator | Thursday 05 March 2026 00:56:10 +0000 (0:00:00.693) 0:07:09.723 ********
2026-03-05 01:00:47.220150 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.220154 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.220158 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.220162 | orchestrator |
2026-03-05 01:00:47.220166 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-05 01:00:47.220170 | orchestrator | Thursday 05 March 2026 00:56:11 +0000 (0:00:00.705) 0:07:10.429 ********
2026-03-05 01:00:47.220174 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.220178 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.220182 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.220186 | orchestrator |
2026-03-05 01:00:47.220190 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-05 01:00:47.220194 | orchestrator | Thursday 05 March 2026 00:56:12 +0000 (0:00:01.144) 0:07:11.573 ********
2026-03-05 01:00:47.220198 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.220202 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.220206 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.220210 | orchestrator |
2026-03-05 01:00:47.220214 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-05 01:00:47.220219 | orchestrator | Thursday 05 March 2026 00:56:12 +0000 (0:00:00.313) 0:07:11.887 ********
2026-03-05 01:00:47.220226 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.220230 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.220234 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.220238 | orchestrator |
2026-03-05 01:00:47.220242 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-05 01:00:47.220246 | orchestrator | Thursday 05 March 2026 00:56:12 +0000 (0:00:00.329) 0:07:12.217 ********
2026-03-05 01:00:47.220250 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.220254 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.220258 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.220262 | orchestrator |
2026-03-05 01:00:47.220266 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-05 01:00:47.220270 | orchestrator | Thursday 05 March 2026 00:56:13 +0000 (0:00:00.323) 0:07:12.540 ********
2026-03-05 01:00:47.220274 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.220278 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.220282 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.220286 | orchestrator |
2026-03-05 01:00:47.220290 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-05 01:00:47.220294 | orchestrator | Thursday 05 March 2026 00:56:14 +0000 (0:00:01.174) 0:07:13.714 ********
2026-03-05 01:00:47.220298 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.220302 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.220307 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.220311 | orchestrator |
2026-03-05 01:00:47.220315 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-05 01:00:47.220319 | orchestrator | Thursday 05 March 2026 00:56:15 +0000 (0:00:00.730) 0:07:14.445 ********
2026-03-05 01:00:47.220323 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.220331 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.220335 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.220340 | orchestrator |
2026-03-05 01:00:47.220344 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-05 01:00:47.220348 | orchestrator | Thursday 05 March 2026 00:56:15 +0000 (0:00:00.367) 0:07:14.813 ********
2026-03-05 01:00:47.220352 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.220356 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.220360 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.220364 | orchestrator |
2026-03-05 01:00:47.220368 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-05 01:00:47.220372 | orchestrator | Thursday 05 March 2026 00:56:15 +0000 (0:00:00.299) 0:07:15.113 ********
2026-03-05 01:00:47.220376 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.220380 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.220384 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.220388 | orchestrator |
2026-03-05 01:00:47.220393 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-05 01:00:47.220397 | orchestrator | Thursday 05 March 2026 00:56:16 +0000 (0:00:00.571) 0:07:15.684 ********
2026-03-05 01:00:47.220401 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.220405 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.220409 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.220413 | orchestrator |
2026-03-05 01:00:47.220417 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-05 01:00:47.220421 | orchestrator | Thursday 05 March 2026 00:56:16 +0000 (0:00:00.473) 0:07:16.157 ********
2026-03-05 01:00:47.220425 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.220429 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.220433 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.220437 | orchestrator |
2026-03-05 01:00:47.220441 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-05 01:00:47.220445 | orchestrator | Thursday 05 March 2026 00:56:17 +0000 (0:00:00.340) 0:07:16.498 ********
2026-03-05 01:00:47.220449 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.220453 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.220457 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.220461 | orchestrator |
2026-03-05 01:00:47.220465 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-05 01:00:47.220469 | orchestrator | Thursday 05 March 2026 00:56:17 +0000 (0:00:00.287) 0:07:16.785 ********
2026-03-05 01:00:47.220473 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.220477 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.220481 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.220485 | orchestrator |
2026-03-05 01:00:47.220489 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-05 01:00:47.220493 | orchestrator | Thursday 05 March 2026 00:56:18 +0000 (0:00:00.625) 0:07:17.410 ********
2026-03-05 01:00:47.220497 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.220501 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.220505 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.220509 | orchestrator |
2026-03-05 01:00:47.220513 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-05 01:00:47.220517 | orchestrator | Thursday 05 March 2026 00:56:18 +0000 (0:00:00.342) 0:07:17.753 ********
2026-03-05 01:00:47.220521 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.220529 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.220533 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.220537 | orchestrator |
2026-03-05 01:00:47.220541 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-05 01:00:47.220545 | orchestrator | Thursday 05 March 2026 00:56:18 +0000 (0:00:00.449) 0:07:18.203 ********
2026-03-05 01:00:47.220549 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.220553 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.220560 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.220564 | orchestrator |
2026-03-05 01:00:47.220568 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-05 01:00:47.220572 | orchestrator | Thursday 05 March 2026 00:56:19 +0000 (0:00:00.761) 0:07:18.964 ********
2026-03-05 01:00:47.220576 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.220580 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.220584 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.220588 | orchestrator |
2026-03-05 01:00:47.220592 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-05 01:00:47.220596 | orchestrator | Thursday 05 March 2026 00:56:19 +0000 (0:00:00.349) 0:07:19.314 ********
2026-03-05 01:00:47.220600 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-05 01:00:47.220607 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-05 01:00:47.220611 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-05 01:00:47.220615 | orchestrator |
2026-03-05 01:00:47.220619 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-05 01:00:47.220623 | orchestrator | Thursday 05 March 2026 00:56:20 +0000 (0:00:00.674) 0:07:19.988 ********
2026-03-05 01:00:47.220627 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.220632 | orchestrator |
2026-03-05 01:00:47.220636 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-05 01:00:47.220640 | orchestrator | Thursday 05 March 2026 00:56:21 +0000 (0:00:00.540) 0:07:20.528 ********
2026-03-05 01:00:47.220644 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.220648 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.220652 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.220656 | orchestrator |
2026-03-05 01:00:47.220660 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-05 01:00:47.220664 | orchestrator | Thursday 05 March 2026 00:56:21 +0000 (0:00:00.629) 0:07:21.158 ********
2026-03-05 01:00:47.220668 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.220672 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.220676 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.220680 | orchestrator |
2026-03-05 01:00:47.220684 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-05 01:00:47.220688 | orchestrator | Thursday 05 March 2026 00:56:22 +0000 (0:00:00.347) 0:07:21.505 ********
2026-03-05 01:00:47.220692 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.220697 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.220701 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.220705 | orchestrator |
2026-03-05 01:00:47.220709 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-05 01:00:47.220713 | orchestrator | Thursday 05 March 2026 00:56:22 +0000 (0:00:00.746) 0:07:22.252 ********
2026-03-05 01:00:47.220717 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.220721 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.220725 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.220729 | orchestrator |
2026-03-05 01:00:47.220733 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-05 01:00:47.220737 | orchestrator | Thursday 05 March 2026 00:56:23 +0000 (0:00:00.364) 0:07:22.617 ********
2026-03-05 01:00:47.220742 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-05 01:00:47.220746 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-05 01:00:47.220750 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-05 01:00:47.220754 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-05 01:00:47.220758 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-05 01:00:47.220764 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-05 01:00:47.220769 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-05 01:00:47.220773 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-05 01:00:47.220777 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-05 01:00:47.220781 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-05 01:00:47.220785 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-05 01:00:47.220789 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-05 01:00:47.220793 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-05 01:00:47.220797 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-05 01:00:47.220801 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-05 01:00:47.220805 | orchestrator |
2026-03-05 01:00:47.220809 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-05 01:00:47.220815 | orchestrator | Thursday 05 March 2026 00:56:27 +0000 (0:00:04.257) 0:07:26.874 ********
2026-03-05 01:00:47.220819 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.220823 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.220827 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.220831 | orchestrator |
2026-03-05 01:00:47.220835 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-05 01:00:47.220839 | orchestrator | Thursday 05 March 2026 00:56:27 +0000 (0:00:00.310) 0:07:27.184 ********
2026-03-05 01:00:47.220843 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.220847 | orchestrator |
2026-03-05 01:00:47.220851 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-05 01:00:47.220855 | orchestrator | Thursday 05 March 2026 00:56:28 +0000 (0:00:00.563) 0:07:27.748 ********
2026-03-05 01:00:47.220860 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-05 01:00:47.220864 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-05 01:00:47.220868 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-05 01:00:47.220872 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-05 01:00:47.220878 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-05 01:00:47.220882 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-05 01:00:47.220886 | orchestrator |
2026-03-05 01:00:47.220890 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-05 01:00:47.220894 | orchestrator | Thursday 05 March 2026 00:56:29 +0000 (0:00:01.453) 0:07:29.202 ********
2026-03-05 01:00:47.220898 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-05 01:00:47.220902 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-05 01:00:47.220906 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-05 01:00:47.220910 | orchestrator |
2026-03-05 01:00:47.220914 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-05 01:00:47.220918 | orchestrator | Thursday 05 March 2026 00:56:31 +0000 (0:00:02.056) 0:07:31.258 ********
2026-03-05 01:00:47.220922 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-05 01:00:47.220926 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-05 01:00:47.220930 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-05 01:00:47.220934 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.220938 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-05 01:00:47.220946 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.220950 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-05 01:00:47.220954 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-05 01:00:47.220958 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.220962 | orchestrator |
2026-03-05 01:00:47.220966 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-05 01:00:47.220970 | orchestrator | Thursday 05 March 2026 00:56:33 +0000 (0:00:01.288) 0:07:32.547 ********
2026-03-05 01:00:47.220974 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-05 01:00:47.220978 | orchestrator |
2026-03-05 01:00:47.220982 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-05 01:00:47.220986 | orchestrator | Thursday 05 March 2026 00:56:35 +0000 (0:00:02.264) 0:07:34.811 ********
2026-03-05 01:00:47.220990 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.220994 | orchestrator |
2026-03-05 01:00:47.220998 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-05 01:00:47.221002 | orchestrator | Thursday 05 March 2026 00:56:36 +0000 (0:00:00.814) 0:07:35.626 ********
2026-03-05 01:00:47.221006 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bb27c3c1-5e00-588a-af48-66c3e9a20c72', 'data_vg': 'ceph-bb27c3c1-5e00-588a-af48-66c3e9a20c72'})
2026-03-05 01:00:47.221011 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-487cf15b-a3c4-55bb-8565-d1e78d85d824', 'data_vg': 'ceph-487cf15b-a3c4-55bb-8565-d1e78d85d824'})
2026-03-05 01:00:47.221015 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8e61642d-a609-5f4c-883e-a16b698ed397', 'data_vg': 'ceph-8e61642d-a609-5f4c-883e-a16b698ed397'})
2026-03-05 01:00:47.221019 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-04f48836-d47d-5181-a61a-7e2c62572595', 'data_vg': 'ceph-04f48836-d47d-5181-a61a-7e2c62572595'})
2026-03-05 01:00:47.221023 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1a9c38f8-c56f-5625-8ade-2e45962405d2', 'data_vg': 'ceph-1a9c38f8-c56f-5625-8ade-2e45962405d2'})
2026-03-05 01:00:47.221027 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-52eeae7c-0ac3-5716-aafe-40e466221a22', 'data_vg': 'ceph-52eeae7c-0ac3-5716-aafe-40e466221a22'})
2026-03-05 01:00:47.221031 | orchestrator |
2026-03-05 01:00:47.221035 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-05 01:00:47.221039 | orchestrator | Thursday 05 March 2026 00:57:14 +0000 (0:00:37.970) 0:08:13.596 ********
2026-03-05 01:00:47.221044 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221048 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.221052 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.221056 | orchestrator |
2026-03-05 01:00:47.221060 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-05 01:00:47.221064 | orchestrator | Thursday 05 March 2026 00:57:14 +0000 (0:00:00.350) 0:08:13.947 ********
2026-03-05 01:00:47.221070 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.221074 | orchestrator |
2026-03-05 01:00:47.221078 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-05 01:00:47.221082 | orchestrator | Thursday 05 March 2026 00:57:15 +0000 (0:00:00.775) 0:08:14.722 ********
2026-03-05 01:00:47.221086 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.221091 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.221095 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.221099 | orchestrator |
2026-03-05 01:00:47.221103 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-05 01:00:47.221107 | orchestrator | Thursday 05 March 2026 00:57:16 +0000 (0:00:00.696) 0:08:15.419 ********
2026-03-05 01:00:47.221111 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.221115 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.221122 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.221125 | orchestrator |
2026-03-05 01:00:47.221130 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-05 01:00:47.221160 | orchestrator | Thursday 05 March 2026 00:57:18 +0000 (0:00:02.698) 0:08:18.118 ********
2026-03-05 01:00:47.221164 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.221168 | orchestrator |
2026-03-05 01:00:47.221175 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-05 01:00:47.221179 | orchestrator | Thursday 05 March 2026 00:57:19 +0000 (0:00:00.766) 0:08:18.885 ********
2026-03-05 01:00:47.221183 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.221186 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.221190 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.221194 | orchestrator |
2026-03-05 01:00:47.221198 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-05 01:00:47.221202 | orchestrator | Thursday 05 March 2026 00:57:20 +0000 (0:00:01.346) 0:08:20.231 ********
2026-03-05 01:00:47.221206 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.221210 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.221213 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.221217 | orchestrator |
2026-03-05 01:00:47.221221 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-05 01:00:47.221225 | orchestrator | Thursday 05 March 2026 00:57:22 +0000 (0:00:02.032) 0:08:21.415 ********
2026-03-05 01:00:47.221229 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.221233 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.221236 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.221240 | orchestrator |
2026-03-05 01:00:47.221244 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-05 01:00:47.221248 | orchestrator | Thursday 05 March 2026 00:57:24 +0000 (0:00:02.032) 0:08:23.448 ********
2026-03-05 01:00:47.221252 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221255 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.221259 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.221270 | orchestrator |
2026-03-05 01:00:47.221274 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-05 01:00:47.221283 | orchestrator | Thursday 05 March 2026 00:57:24 +0000 (0:00:00.647) 0:08:24.095 ********
2026-03-05 01:00:47.221287 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221291 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.221295 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.221299 | orchestrator |
2026-03-05 01:00:47.221303 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-05 01:00:47.221307 | orchestrator | Thursday 05 March 2026 00:57:25 +0000 (0:00:00.333) 0:08:24.429 ********
2026-03-05 01:00:47.221310 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-03-05 01:00:47.221314 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-03-05 01:00:47.221318 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-05 01:00:47.221322 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-05 01:00:47.221325 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-03-05 01:00:47.221329 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-03-05 01:00:47.221333 | orchestrator |
2026-03-05 01:00:47.221337 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-05 01:00:47.221341 | orchestrator | Thursday 05 March 2026 00:57:26 +0000 (0:00:01.154) 0:08:25.584 ********
2026-03-05 01:00:47.221345 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-05 01:00:47.221348 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-05 01:00:47.221352 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-05 01:00:47.221356 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-05 01:00:47.221360 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-03-05 01:00:47.221364 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-03-05 01:00:47.221372 | orchestrator |
2026-03-05 01:00:47.221376 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-05 01:00:47.221380 | orchestrator | Thursday 05 March 2026 00:57:28 +0000 (0:00:02.263) 0:08:27.847 ********
2026-03-05 01:00:47.221384 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-05 01:00:47.221388 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-05 01:00:47.221392 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-05 01:00:47.221395 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-05 01:00:47.221399 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-03-05 01:00:47.221403 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-03-05 01:00:47.221407 | orchestrator |
2026-03-05 01:00:47.221411 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-05 01:00:47.221415 | orchestrator | Thursday 05 March 2026 00:57:32 +0000 (0:00:04.352) 0:08:32.200 ********
2026-03-05 01:00:47.221419 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221423 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.221426 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-05 01:00:47.221430 | orchestrator |
2026-03-05 01:00:47.221434 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-05 01:00:47.221438 | orchestrator | Thursday 05 March 2026 00:57:36 +0000 (0:00:03.293) 0:08:35.493 ********
2026-03-05 01:00:47.221442 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221449 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.221453 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-05 01:00:47.221457 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-05 01:00:47.221461 | orchestrator |
2026-03-05 01:00:47.221464 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-05 01:00:47.221468 | orchestrator | Thursday 05 March 2026 00:57:48 +0000 (0:00:12.552) 0:08:48.046 ********
2026-03-05 01:00:47.221472 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221476 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.221480 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.221484 | orchestrator |
2026-03-05 01:00:47.221487 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-05 01:00:47.221491 | orchestrator | Thursday 05 March 2026 00:57:49 +0000 (0:00:01.160) 0:08:49.207 ********
2026-03-05 01:00:47.221495 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221499 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.221503 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.221508 | orchestrator |
2026-03-05 01:00:47.221514 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-05 01:00:47.221523 | orchestrator | Thursday 05 March 2026 00:57:50 +0000 (0:00:00.385) 0:08:49.593 ********
2026-03-05 01:00:47.221529 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.221538 | orchestrator |
2026-03-05 01:00:47.221548 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-05 01:00:47.221553 | orchestrator | Thursday 05 March 2026 00:57:51 +0000 (0:00:01.027) 0:08:50.620 ********
2026-03-05 01:00:47.221559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 01:00:47.221566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 01:00:47.221572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 01:00:47.221578 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221584 | orchestrator |
2026-03-05 01:00:47.221590 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-05 01:00:47.221595 | orchestrator | Thursday 05 March 2026 00:57:51 +0000 (0:00:00.452) 0:08:51.073 ********
2026-03-05 01:00:47.221601 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221608 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.221620 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.221627 | orchestrator |
2026-03-05 01:00:47.221633 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-05 01:00:47.221639 | orchestrator | Thursday 05 March 2026 00:57:52 +0000 (0:00:00.349) 0:08:51.422 ********
2026-03-05 01:00:47.221645 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221649 | orchestrator |
2026-03-05 01:00:47.221652 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-05 01:00:47.221656 | orchestrator | Thursday 05 March 2026 00:57:52 +0000 (0:00:00.249) 0:08:51.672 ********
2026-03-05 01:00:47.221660 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221664 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.221668 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.221672 | orchestrator |
2026-03-05 01:00:47.221675 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-05 01:00:47.221679 | orchestrator | Thursday 05 March 2026 00:57:52 +0000 (0:00:00.366) 0:08:52.038 ********
2026-03-05 01:00:47.221683 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221687 | orchestrator |
2026-03-05 01:00:47.221691 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-05 01:00:47.221695 | orchestrator | Thursday 05 March 2026 00:57:52 +0000 (0:00:00.238) 0:08:52.277 ********
2026-03-05 01:00:47.221699 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221702 | orchestrator |
2026-03-05 01:00:47.221706 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-05 01:00:47.221710 | orchestrator | Thursday 05 March 2026 00:57:53 +0000 (0:00:00.289) 0:08:52.567 ********
2026-03-05 01:00:47.221714 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221718 | orchestrator |
2026-03-05 01:00:47.221722 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-05 01:00:47.221725 | orchestrator | Thursday 05 March 2026 00:57:53 +0000 (0:00:00.179) 0:08:52.746 ********
2026-03-05 01:00:47.221729 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221733 | orchestrator |
2026-03-05 01:00:47.221737 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-05 01:00:47.221741 | orchestrator | Thursday 05 March 2026 00:57:54 +0000 (0:00:00.879) 0:08:53.626 ********
2026-03-05 01:00:47.221745 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221748 | orchestrator |
2026-03-05 01:00:47.221752 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-05 01:00:47.221756 | orchestrator | Thursday 05 March 2026 00:57:54 +0000 (0:00:00.234) 0:08:53.860 ********
2026-03-05 01:00:47.221760 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 01:00:47.221764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 01:00:47.221767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 01:00:47.221771 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221775 | orchestrator |
2026-03-05 01:00:47.221779 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-05 01:00:47.221783 | orchestrator | Thursday 05 March 2026 00:57:54 +0000 (0:00:00.412) 0:08:54.272 ********
2026-03-05 01:00:47.221787 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221791 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.221794 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.221798 | orchestrator |
2026-03-05 01:00:47.221802 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-05 01:00:47.221806 | orchestrator | Thursday 05 March 2026 00:57:55 +0000 (0:00:00.343) 0:08:54.616 ********
2026-03-05 01:00:47.221810 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221814 | orchestrator |
2026-03-05 01:00:47.221821 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-05 01:00:47.221825 | orchestrator | Thursday 05 March 2026 00:57:55 +0000 (0:00:00.263) 0:08:54.879 ********
2026-03-05 01:00:47.221829 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221838 | orchestrator |
2026-03-05 01:00:47.221842 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-05 01:00:47.221846 | orchestrator |
2026-03-05 01:00:47.221849 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-05 01:00:47.221853 | orchestrator | Thursday 05 March 2026 00:57:56 +0000 (0:00:00.998) 0:08:55.878 ********
2026-03-05 01:00:47.221857 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.221863 | orchestrator |
2026-03-05 01:00:47.221867 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-05 01:00:47.221871 | orchestrator | Thursday 05 March 2026 00:57:57 +0000 (0:00:01.330) 0:08:57.209 ********
2026-03-05 01:00:47.221881 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:00:47.221887 | orchestrator |
2026-03-05 01:00:47.221893 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-05 01:00:47.221900 | orchestrator | Thursday 05 March 2026 00:57:59 +0000 (0:00:01.365) 0:08:58.574 ********
2026-03-05 01:00:47.221905 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.221911 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.221916 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.221923 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:00:47.221928 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:00:47.221934 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:00:47.221940 | orchestrator |
2026-03-05 01:00:47.221946 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-05 01:00:47.221952 | orchestrator | Thursday 05 March 2026 00:58:00 +0000 (0:00:01.077) 0:08:59.652 ********
2026-03-05 01:00:47.221958 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:00:47.221964 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.221971 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:00:47.221975 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.221979 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:00:47.221983 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.221987 | orchestrator |
2026-03-05 01:00:47.221991 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-05 01:00:47.221994 | orchestrator | Thursday 05
March 2026 00:58:01 +0000 (0:00:00.738) 0:09:00.391 ******** 2026-03-05 01:00:47.221998 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.222002 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.222006 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.222010 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.222098 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.222104 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.222107 | orchestrator | 2026-03-05 01:00:47.222112 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-05 01:00:47.222115 | orchestrator | Thursday 05 March 2026 00:58:02 +0000 (0:00:01.072) 0:09:01.463 ******** 2026-03-05 01:00:47.222119 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.222123 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.222127 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.222131 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.222149 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.222155 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.222161 | orchestrator | 2026-03-05 01:00:47.222166 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-05 01:00:47.222172 | orchestrator | Thursday 05 March 2026 00:58:02 +0000 (0:00:00.653) 0:09:02.116 ******** 2026-03-05 01:00:47.222177 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.222184 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.222188 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.222198 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.222202 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.222205 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.222209 | orchestrator | 2026-03-05 01:00:47.222213 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-03-05 01:00:47.222217 | orchestrator | Thursday 05 March 2026 00:58:04 +0000 (0:00:01.334) 0:09:03.450 ******** 2026-03-05 01:00:47.222221 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.222225 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.222228 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.222232 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.222236 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.222240 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.222243 | orchestrator | 2026-03-05 01:00:47.222247 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-05 01:00:47.222251 | orchestrator | Thursday 05 March 2026 00:58:04 +0000 (0:00:00.729) 0:09:04.180 ******** 2026-03-05 01:00:47.222255 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.222259 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.222262 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.222266 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.222270 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.222274 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.222277 | orchestrator | 2026-03-05 01:00:47.222281 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-05 01:00:47.222285 | orchestrator | Thursday 05 March 2026 00:58:05 +0000 (0:00:00.871) 0:09:05.052 ******** 2026-03-05 01:00:47.222289 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.222293 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.222297 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.222301 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.222304 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.222308 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.222312 | orchestrator 
| 2026-03-05 01:00:47.222316 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-05 01:00:47.222320 | orchestrator | Thursday 05 March 2026 00:58:06 +0000 (0:00:01.209) 0:09:06.261 ******** 2026-03-05 01:00:47.222327 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.222331 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.222335 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.222339 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.222342 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.222346 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.222350 | orchestrator | 2026-03-05 01:00:47.222354 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-05 01:00:47.222358 | orchestrator | Thursday 05 March 2026 00:58:08 +0000 (0:00:01.529) 0:09:07.791 ******** 2026-03-05 01:00:47.222361 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.222365 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.222369 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.222373 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.222377 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.222380 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.222384 | orchestrator | 2026-03-05 01:00:47.222388 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-05 01:00:47.222392 | orchestrator | Thursday 05 March 2026 00:58:09 +0000 (0:00:00.645) 0:09:08.436 ******** 2026-03-05 01:00:47.222396 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.222400 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.222407 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.222411 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.222415 | orchestrator | ok: [testbed-node-1] 2026-03-05 
01:00:47.222419 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.222423 | orchestrator | 2026-03-05 01:00:47.222427 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-05 01:00:47.222434 | orchestrator | Thursday 05 March 2026 00:58:09 +0000 (0:00:00.859) 0:09:09.295 ******** 2026-03-05 01:00:47.222438 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.222441 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.222445 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.222449 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.222453 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.222457 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.222461 | orchestrator | 2026-03-05 01:00:47.222464 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-05 01:00:47.222468 | orchestrator | Thursday 05 March 2026 00:58:10 +0000 (0:00:00.609) 0:09:09.905 ******** 2026-03-05 01:00:47.222472 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.222476 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.222480 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.222483 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.222487 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.222491 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.222495 | orchestrator | 2026-03-05 01:00:47.222499 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-05 01:00:47.222503 | orchestrator | Thursday 05 March 2026 00:58:11 +0000 (0:00:00.924) 0:09:10.830 ******** 2026-03-05 01:00:47.222507 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.222510 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.222514 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.222518 | orchestrator | skipping: [testbed-node-0] 
2026-03-05 01:00:47.222522 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.222526 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.222530 | orchestrator | 2026-03-05 01:00:47.222533 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-05 01:00:47.222537 | orchestrator | Thursday 05 March 2026 00:58:12 +0000 (0:00:00.687) 0:09:11.517 ******** 2026-03-05 01:00:47.222541 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.222545 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.222549 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.222553 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.222556 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.222560 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.222564 | orchestrator | 2026-03-05 01:00:47.222568 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-05 01:00:47.222572 | orchestrator | Thursday 05 March 2026 00:58:13 +0000 (0:00:00.888) 0:09:12.405 ******** 2026-03-05 01:00:47.222576 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.222580 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.222583 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.222587 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:00:47.222591 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:00:47.222595 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:00:47.222599 | orchestrator | 2026-03-05 01:00:47.222603 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-05 01:00:47.222607 | orchestrator | Thursday 05 March 2026 00:58:13 +0000 (0:00:00.626) 0:09:13.032 ******** 2026-03-05 01:00:47.222611 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.222614 | orchestrator | skipping: [testbed-node-4] 
2026-03-05 01:00:47.222618 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.222622 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.222626 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.222630 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.222634 | orchestrator | 2026-03-05 01:00:47.222637 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-05 01:00:47.222641 | orchestrator | Thursday 05 March 2026 00:58:14 +0000 (0:00:00.950) 0:09:13.983 ******** 2026-03-05 01:00:47.222645 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.222649 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.222658 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.222661 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.222665 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.222669 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.222673 | orchestrator | 2026-03-05 01:00:47.222677 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-05 01:00:47.222680 | orchestrator | Thursday 05 March 2026 00:58:15 +0000 (0:00:00.646) 0:09:14.630 ******** 2026-03-05 01:00:47.222684 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.222688 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.222692 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.222696 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.222700 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.222703 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.222707 | orchestrator | 2026-03-05 01:00:47.222711 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-05 01:00:47.222715 | orchestrator | Thursday 05 March 2026 00:58:16 +0000 (0:00:01.367) 0:09:15.997 ******** 2026-03-05 01:00:47.222722 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-05 01:00:47.222726 | orchestrator | 2026-03-05 01:00:47.222730 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-05 01:00:47.222733 | orchestrator | Thursday 05 March 2026 00:58:20 +0000 (0:00:03.969) 0:09:19.966 ******** 2026-03-05 01:00:47.222737 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-05 01:00:47.222741 | orchestrator | 2026-03-05 01:00:47.222745 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-05 01:00:47.222749 | orchestrator | Thursday 05 March 2026 00:58:22 +0000 (0:00:02.065) 0:09:22.032 ******** 2026-03-05 01:00:47.222753 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:00:47.222757 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:00:47.222760 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.222764 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:00:47.222768 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.222772 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.222776 | orchestrator | 2026-03-05 01:00:47.222780 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-05 01:00:47.222783 | orchestrator | Thursday 05 March 2026 00:58:24 +0000 (0:00:01.970) 0:09:24.002 ******** 2026-03-05 01:00:47.222790 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:00:47.222795 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:00:47.222798 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:00:47.222802 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:00:47.222806 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.222810 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.222813 | orchestrator | 2026-03-05 01:00:47.222817 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-05 01:00:47.222821 | orchestrator | Thursday 05 March 2026 00:58:25 +0000 (0:00:01.028) 0:09:25.031 ******** 2026-03-05 01:00:47.222825 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:00:47.222830 | orchestrator | 2026-03-05 01:00:47.222834 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-05 01:00:47.222838 | orchestrator | Thursday 05 March 2026 00:58:27 +0000 (0:00:01.383) 0:09:26.414 ******** 2026-03-05 01:00:47.222842 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:00:47.222846 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:00:47.222849 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:00:47.222853 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.222857 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:00:47.222861 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.222864 | orchestrator | 2026-03-05 01:00:47.222868 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-05 01:00:47.222879 | orchestrator | Thursday 05 March 2026 00:58:28 +0000 (0:00:01.863) 0:09:28.277 ******** 2026-03-05 01:00:47.222883 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:00:47.222887 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:00:47.222890 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:00:47.222894 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:00:47.222898 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.222902 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.222905 | orchestrator | 2026-03-05 01:00:47.222909 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-05 01:00:47.222913 | orchestrator | Thursday 05 March 2026 00:58:32 +0000 (0:00:03.936) 
0:09:32.213 ******** 2026-03-05 01:00:47.222917 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:00:47.222921 | orchestrator | 2026-03-05 01:00:47.222925 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-05 01:00:47.222929 | orchestrator | Thursday 05 March 2026 00:58:34 +0000 (0:00:01.436) 0:09:33.649 ******** 2026-03-05 01:00:47.222933 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.222937 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.222941 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.222945 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.222948 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.222952 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.222956 | orchestrator | 2026-03-05 01:00:47.222960 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-05 01:00:47.222964 | orchestrator | Thursday 05 March 2026 00:58:35 +0000 (0:00:00.892) 0:09:34.542 ******** 2026-03-05 01:00:47.222968 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:00:47.222972 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:00:47.222975 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:00:47.222979 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:00:47.222983 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:00:47.222987 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:00:47.222991 | orchestrator | 2026-03-05 01:00:47.222995 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-05 01:00:47.222999 | orchestrator | Thursday 05 March 2026 00:58:37 +0000 (0:00:02.597) 0:09:37.140 ******** 2026-03-05 01:00:47.223002 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.223006 | 
orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.223010 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.223014 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:00:47.223018 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:00:47.223022 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:00:47.223026 | orchestrator | 2026-03-05 01:00:47.223030 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-05 01:00:47.223034 | orchestrator | 2026-03-05 01:00:47.223037 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-05 01:00:47.223041 | orchestrator | Thursday 05 March 2026 00:58:39 +0000 (0:00:01.245) 0:09:38.385 ******** 2026-03-05 01:00:47.223045 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:00:47.223049 | orchestrator | 2026-03-05 01:00:47.223053 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-05 01:00:47.223060 | orchestrator | Thursday 05 March 2026 00:58:39 +0000 (0:00:00.517) 0:09:38.903 ******** 2026-03-05 01:00:47.223064 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:00:47.223068 | orchestrator | 2026-03-05 01:00:47.223072 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-05 01:00:47.223076 | orchestrator | Thursday 05 March 2026 00:58:40 +0000 (0:00:00.804) 0:09:39.708 ******** 2026-03-05 01:00:47.223080 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.223087 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.223091 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.223095 | orchestrator | 2026-03-05 01:00:47.223098 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-05 01:00:47.223102 | orchestrator | Thursday 05 March 2026 00:58:40 +0000 (0:00:00.317) 0:09:40.026 ******** 2026-03-05 01:00:47.223106 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.223110 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.223114 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.223118 | orchestrator | 2026-03-05 01:00:47.223122 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-05 01:00:47.223129 | orchestrator | Thursday 05 March 2026 00:58:41 +0000 (0:00:00.728) 0:09:40.754 ******** 2026-03-05 01:00:47.223144 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.223148 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.223152 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.223156 | orchestrator | 2026-03-05 01:00:47.223160 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-05 01:00:47.223164 | orchestrator | Thursday 05 March 2026 00:58:42 +0000 (0:00:01.146) 0:09:41.901 ******** 2026-03-05 01:00:47.223167 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.223171 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.223175 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.223179 | orchestrator | 2026-03-05 01:00:47.223183 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-05 01:00:47.223187 | orchestrator | Thursday 05 March 2026 00:58:43 +0000 (0:00:00.855) 0:09:42.757 ******** 2026-03-05 01:00:47.223190 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.223194 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.223198 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.223202 | orchestrator | 2026-03-05 01:00:47.223206 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-05 
01:00:47.223210 | orchestrator | Thursday 05 March 2026 00:58:43 +0000 (0:00:00.362) 0:09:43.120 ******** 2026-03-05 01:00:47.223214 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.223218 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.223221 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.223225 | orchestrator | 2026-03-05 01:00:47.223229 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-05 01:00:47.223233 | orchestrator | Thursday 05 March 2026 00:58:44 +0000 (0:00:00.326) 0:09:43.446 ******** 2026-03-05 01:00:47.223237 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.223241 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.223244 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.223248 | orchestrator | 2026-03-05 01:00:47.223252 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-05 01:00:47.223256 | orchestrator | Thursday 05 March 2026 00:58:44 +0000 (0:00:00.620) 0:09:44.067 ******** 2026-03-05 01:00:47.223260 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.223264 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.223267 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.223271 | orchestrator | 2026-03-05 01:00:47.223275 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-05 01:00:47.223279 | orchestrator | Thursday 05 March 2026 00:58:45 +0000 (0:00:00.799) 0:09:44.866 ******** 2026-03-05 01:00:47.223283 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.223287 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.223291 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.223294 | orchestrator | 2026-03-05 01:00:47.223298 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-05 01:00:47.223302 | orchestrator | 
Thursday 05 March 2026 00:58:46 +0000 (0:00:00.802) 0:09:45.669 ******** 2026-03-05 01:00:47.223306 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.223310 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.223314 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.223321 | orchestrator | 2026-03-05 01:00:47.223325 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-05 01:00:47.223328 | orchestrator | Thursday 05 March 2026 00:58:46 +0000 (0:00:00.344) 0:09:46.013 ******** 2026-03-05 01:00:47.223332 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.223336 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.223340 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.223344 | orchestrator | 2026-03-05 01:00:47.223347 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-05 01:00:47.223351 | orchestrator | Thursday 05 March 2026 00:58:47 +0000 (0:00:00.639) 0:09:46.653 ******** 2026-03-05 01:00:47.223355 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.223359 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.223363 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.223367 | orchestrator | 2026-03-05 01:00:47.223370 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-05 01:00:47.223374 | orchestrator | Thursday 05 March 2026 00:58:47 +0000 (0:00:00.355) 0:09:47.009 ******** 2026-03-05 01:00:47.223378 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.223382 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.223386 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.223389 | orchestrator | 2026-03-05 01:00:47.223393 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-05 01:00:47.223397 | orchestrator | Thursday 05 March 2026 00:58:48 +0000 
(0:00:00.433) 0:09:47.443 ******** 2026-03-05 01:00:47.223401 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.223405 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.223409 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.223412 | orchestrator | 2026-03-05 01:00:47.223416 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-05 01:00:47.223420 | orchestrator | Thursday 05 March 2026 00:58:48 +0000 (0:00:00.484) 0:09:47.927 ******** 2026-03-05 01:00:47.223427 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.223431 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.223434 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.223438 | orchestrator | 2026-03-05 01:00:47.223442 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-05 01:00:47.223446 | orchestrator | Thursday 05 March 2026 00:58:49 +0000 (0:00:00.640) 0:09:48.567 ******** 2026-03-05 01:00:47.223450 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.223454 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.223457 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.223461 | orchestrator | 2026-03-05 01:00:47.223465 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-05 01:00:47.223469 | orchestrator | Thursday 05 March 2026 00:58:49 +0000 (0:00:00.344) 0:09:48.911 ******** 2026-03-05 01:00:47.223473 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.223477 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.223481 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.223484 | orchestrator | 2026-03-05 01:00:47.223488 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-05 01:00:47.223492 | orchestrator | Thursday 05 March 2026 00:58:49 +0000 (0:00:00.395) 
0:09:49.306 ********
2026-03-05 01:00:47.223496 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.223502 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.223506 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.223510 | orchestrator |
2026-03-05 01:00:47.223514 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-05 01:00:47.223518 | orchestrator | Thursday 05 March 2026 00:58:50 +0000 (0:00:00.529) 0:09:49.836 ********
2026-03-05 01:00:47.223522 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.223526 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.223529 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.223533 | orchestrator |
2026-03-05 01:00:47.223537 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-05 01:00:47.223546 | orchestrator | Thursday 05 March 2026 00:58:51 +0000 (0:00:00.946) 0:09:50.783 ********
2026-03-05 01:00:47.223550 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.223554 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.223558 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-05 01:00:47.223562 | orchestrator |
2026-03-05 01:00:47.223566 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-05 01:00:47.223569 | orchestrator | Thursday 05 March 2026 00:58:51 +0000 (0:00:00.545) 0:09:51.329 ********
2026-03-05 01:00:47.223573 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-05 01:00:47.223577 | orchestrator |
2026-03-05 01:00:47.223581 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-05 01:00:47.223585 | orchestrator | Thursday 05 March 2026 00:58:54 +0000 (0:00:02.230) 0:09:53.560 ********
2026-03-05 01:00:47.223591 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-03-05 01:00:47.223596 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.223600 | orchestrator |
2026-03-05 01:00:47.223604 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-05 01:00:47.223608 | orchestrator | Thursday 05 March 2026 00:58:54 +0000 (0:00:00.342) 0:09:53.902 ********
2026-03-05 01:00:47.223613 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-05 01:00:47.223623 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-05 01:00:47.223627 | orchestrator |
2026-03-05 01:00:47.223631 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-05 01:00:47.223635 | orchestrator | Thursday 05 March 2026 00:59:03 +0000 (0:00:08.869) 0:10:02.771 ********
2026-03-05 01:00:47.223638 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-05 01:00:47.223642 | orchestrator |
2026-03-05 01:00:47.223646 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-05 01:00:47.223650 | orchestrator | Thursday 05 March 2026 00:59:07 +0000 (0:00:04.057) 0:10:06.829 ********
2026-03-05 01:00:47.223654 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.223658 | orchestrator |
2026-03-05 01:00:47.223662 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-05 01:00:47.223666 | orchestrator | Thursday 05 March 2026 00:59:08 +0000 (0:00:00.570) 0:10:07.399 ********
2026-03-05 01:00:47.223670 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-05 01:00:47.223674 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-05 01:00:47.223677 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-05 01:00:47.223681 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-05 01:00:47.223685 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-05 01:00:47.223689 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-05 01:00:47.223692 | orchestrator |
2026-03-05 01:00:47.223700 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-05 01:00:47.223703 | orchestrator | Thursday 05 March 2026 00:59:09 +0000 (0:00:01.085) 0:10:08.485 ********
2026-03-05 01:00:47.223711 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-05 01:00:47.223715 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-05 01:00:47.223718 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-05 01:00:47.223722 | orchestrator |
2026-03-05 01:00:47.223726 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-05 01:00:47.223730 | orchestrator | Thursday 05 March 2026 00:59:11 +0000 (0:00:02.614) 0:10:11.099 ********
2026-03-05 01:00:47.223734 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-05 01:00:47.223738 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-05 01:00:47.223742 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.223746 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-05 01:00:47.223750 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-05 01:00:47.223753 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-05 01:00:47.223757 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-05 01:00:47.223763 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.223767 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.223771 | orchestrator |
2026-03-05 01:00:47.223774 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-05 01:00:47.223778 | orchestrator | Thursday 05 March 2026 00:59:13 +0000 (0:00:01.522) 0:10:12.622 ********
2026-03-05 01:00:47.223782 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.223786 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.223790 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.223794 | orchestrator |
2026-03-05 01:00:47.223797 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-05 01:00:47.223801 | orchestrator | Thursday 05 March 2026 00:59:15 +0000 (0:00:02.686) 0:10:15.308 ********
2026-03-05 01:00:47.223805 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.223809 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.223813 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.223816 | orchestrator |
2026-03-05 01:00:47.223820 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-05 01:00:47.223824 | orchestrator | Thursday 05 March 2026 00:59:16 +0000 (0:00:00.314) 0:10:15.623 ********
2026-03-05 01:00:47.223828 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.223832 | orchestrator |
2026-03-05 01:00:47.223836 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-05 01:00:47.223839 | orchestrator | Thursday 05 March 2026 00:59:17 +0000 (0:00:00.814) 0:10:16.438 ********
2026-03-05 01:00:47.223843 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.223847 | orchestrator |
2026-03-05 01:00:47.223851 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-05 01:00:47.223855 | orchestrator | Thursday 05 March 2026 00:59:17 +0000 (0:00:00.682) 0:10:17.120 ********
2026-03-05 01:00:47.223858 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.223862 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.223866 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.223870 | orchestrator |
2026-03-05 01:00:47.223874 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-05 01:00:47.223877 | orchestrator | Thursday 05 March 2026 00:59:19 +0000 (0:00:01.294) 0:10:18.415 ********
2026-03-05 01:00:47.223881 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.223885 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.223889 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.223893 | orchestrator |
2026-03-05 01:00:47.223896 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-05 01:00:47.223900 | orchestrator | Thursday 05 March 2026 00:59:20 +0000 (0:00:01.496) 0:10:19.911 ********
2026-03-05 01:00:47.223904 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.223911 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.223915 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.223919 | orchestrator |
2026-03-05 01:00:47.223923 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-05 01:00:47.223927 | orchestrator | Thursday 05 March 2026 00:59:22 +0000 (0:00:02.083) 0:10:21.723 ********
2026-03-05 01:00:47.223930 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.223934 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.223938 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.223942 | orchestrator |
2026-03-05 01:00:47.223946 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-05 01:00:47.223949 | orchestrator | Thursday 05 March 2026 00:59:24 +0000 (0:00:02.083) 0:10:23.807 ********
2026-03-05 01:00:47.223953 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.223957 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.223961 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.223965 | orchestrator |
2026-03-05 01:00:47.223969 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-05 01:00:47.223972 | orchestrator | Thursday 05 March 2026 00:59:26 +0000 (0:00:01.584) 0:10:25.392 ********
2026-03-05 01:00:47.223976 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.223980 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.223984 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.223988 | orchestrator |
2026-03-05 01:00:47.223991 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-05 01:00:47.223995 | orchestrator | Thursday 05 March 2026 00:59:26 +0000 (0:00:00.761) 0:10:26.153 ********
2026-03-05 01:00:47.223999 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.224003 | orchestrator |
2026-03-05 01:00:47.224007 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-05 01:00:47.224011 | orchestrator | Thursday 05 March 2026 00:59:27 +0000 (0:00:00.799) 0:10:26.953 ********
2026-03-05 01:00:47.224017 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.224021 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.224025 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.224028 | orchestrator |
2026-03-05 01:00:47.224032 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-05 01:00:47.224036 | orchestrator | Thursday 05 March 2026 00:59:27 +0000 (0:00:00.344) 0:10:27.297 ********
2026-03-05 01:00:47.224040 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.224044 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.224047 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.224051 | orchestrator |
2026-03-05 01:00:47.224055 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-05 01:00:47.224059 | orchestrator | Thursday 05 March 2026 00:59:29 +0000 (0:00:01.358) 0:10:28.655 ********
2026-03-05 01:00:47.224063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-05 01:00:47.224066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-05 01:00:47.224070 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-05 01:00:47.224074 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.224078 | orchestrator |
2026-03-05 01:00:47.224082 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-05 01:00:47.224087 | orchestrator | Thursday 05 March 2026 00:59:30 +0000 (0:00:00.956) 0:10:29.612 ********
2026-03-05 01:00:47.224091 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.224095 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.224099 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.224103 | orchestrator |
2026-03-05 01:00:47.224107 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-05 01:00:47.224111 | orchestrator |
2026-03-05 01:00:47.224115 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-05 01:00:47.224118 | orchestrator | Thursday 05 March 2026 00:59:31 +0000 (0:00:00.908) 0:10:30.521 ********
2026-03-05 01:00:47.224125 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.224129 | orchestrator |
2026-03-05 01:00:47.224170 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-05 01:00:47.224175 | orchestrator | Thursday 05 March 2026 00:59:31 +0000 (0:00:00.530) 0:10:31.051 ********
2026-03-05 01:00:47.224178 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.224182 | orchestrator |
2026-03-05 01:00:47.224186 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-05 01:00:47.224190 | orchestrator | Thursday 05 March 2026 00:59:32 +0000 (0:00:00.833) 0:10:31.885 ********
2026-03-05 01:00:47.224194 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.224198 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.224202 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.224205 | orchestrator |
2026-03-05 01:00:47.224209 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-05 01:00:47.224213 | orchestrator | Thursday 05 March 2026 00:59:32 +0000 (0:00:00.367) 0:10:32.252 ********
2026-03-05 01:00:47.224217 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.224221 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.224225 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.224228 | orchestrator |
2026-03-05 01:00:47.224232 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-05 01:00:47.224236 | orchestrator | Thursday 05 March 2026 00:59:33 +0000 (0:00:00.732) 0:10:32.985 ********
2026-03-05 01:00:47.224240 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.224244 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.224248 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.224252 | orchestrator |
2026-03-05 01:00:47.224256 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-05 01:00:47.224259 | orchestrator | Thursday 05 March 2026 00:59:34 +0000 (0:00:01.025) 0:10:34.010 ********
2026-03-05 01:00:47.224263 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.224267 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.224271 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.224275 | orchestrator |
2026-03-05 01:00:47.224278 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-05 01:00:47.224282 | orchestrator | Thursday 05 March 2026 00:59:35 +0000 (0:00:00.783) 0:10:34.793 ********
2026-03-05 01:00:47.224286 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.224290 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.224294 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.224298 | orchestrator |
2026-03-05 01:00:47.224301 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-05 01:00:47.224305 | orchestrator | Thursday 05 March 2026 00:59:35 +0000 (0:00:00.316) 0:10:35.109 ********
2026-03-05 01:00:47.224309 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.224313 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.224317 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.224320 | orchestrator |
2026-03-05 01:00:47.224324 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-05 01:00:47.224328 | orchestrator | Thursday 05 March 2026 00:59:36 +0000 (0:00:00.340) 0:10:35.450 ********
2026-03-05 01:00:47.224332 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.224336 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.224340 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.224343 | orchestrator |
2026-03-05 01:00:47.224347 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-05 01:00:47.224351 | orchestrator | Thursday 05 March 2026 00:59:36 +0000 (0:00:00.625) 0:10:36.076 ********
2026-03-05 01:00:47.224355 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.224362 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.224367 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.224373 | orchestrator |
2026-03-05 01:00:47.224380 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-05 01:00:47.224388 | orchestrator | Thursday 05 March 2026 00:59:37 +0000 (0:00:00.739) 0:10:36.816 ********
2026-03-05 01:00:47.224394 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.224400 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.224406 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.224413 | orchestrator |
2026-03-05 01:00:47.224423 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-05 01:00:47.224429 | orchestrator | Thursday 05 March 2026 00:59:38 +0000 (0:00:00.758) 0:10:37.574 ********
2026-03-05 01:00:47.224434 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.224440 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.224445 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.224451 | orchestrator |
2026-03-05 01:00:47.224458 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-05 01:00:47.224464 | orchestrator | Thursday 05 March 2026 00:59:38 +0000 (0:00:00.315) 0:10:37.889 ********
2026-03-05 01:00:47.224471 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.224477 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.224483 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.224489 | orchestrator |
2026-03-05 01:00:47.224496 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-05 01:00:47.224502 | orchestrator | Thursday 05 March 2026 00:59:39 +0000 (0:00:00.625) 0:10:38.514 ********
2026-03-05 01:00:47.224509 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.224515 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.224523 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.224527 | orchestrator |
2026-03-05 01:00:47.224534 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-05 01:00:47.224538 | orchestrator | Thursday 05 March 2026 00:59:39 +0000 (0:00:00.421) 0:10:38.936 ********
2026-03-05 01:00:47.224542 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.224546 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.224549 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.224553 | orchestrator |
2026-03-05 01:00:47.224557 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-05 01:00:47.224561 | orchestrator | Thursday 05 March 2026 00:59:40 +0000 (0:00:00.459) 0:10:39.395 ********
2026-03-05 01:00:47.224565 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.224568 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.224572 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.224576 | orchestrator |
2026-03-05 01:00:47.224580 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-05 01:00:47.224584 | orchestrator | Thursday 05 March 2026 00:59:40 +0000 (0:00:00.345) 0:10:39.740 ********
2026-03-05 01:00:47.224588 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.224591 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.224595 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.224599 | orchestrator |
2026-03-05 01:00:47.224603 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-05 01:00:47.224607 | orchestrator | Thursday 05 March 2026 00:59:41 +0000 (0:00:00.665) 0:10:40.406 ********
2026-03-05 01:00:47.224610 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.224614 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.224618 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.224622 | orchestrator |
2026-03-05 01:00:47.224626 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-05 01:00:47.224630 | orchestrator | Thursday 05 March 2026 00:59:41 +0000 (0:00:00.329) 0:10:40.736 ********
2026-03-05 01:00:47.224633 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.224637 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.224641 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.224649 | orchestrator |
2026-03-05 01:00:47.224653 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-05 01:00:47.224657 | orchestrator | Thursday 05 March 2026 00:59:41 +0000 (0:00:00.316) 0:10:41.052 ********
2026-03-05 01:00:47.224661 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.224665 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.224668 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.224672 | orchestrator |
2026-03-05 01:00:47.224676 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-05 01:00:47.224680 | orchestrator | Thursday 05 March 2026 00:59:42 +0000 (0:00:00.360) 0:10:41.413 ********
2026-03-05 01:00:47.224684 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:00:47.224687 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:00:47.224691 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:00:47.224695 | orchestrator |
2026-03-05 01:00:47.224699 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-05 01:00:47.224703 | orchestrator | Thursday 05 March 2026 00:59:42 +0000 (0:00:00.829) 0:10:42.242 ********
2026-03-05 01:00:47.224707 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.224710 | orchestrator |
2026-03-05 01:00:47.224714 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-05 01:00:47.224718 | orchestrator | Thursday 05 March 2026 00:59:43 +0000 (0:00:00.582) 0:10:42.825 ********
2026-03-05 01:00:47.224722 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-05 01:00:47.224726 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-05 01:00:47.224729 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-05 01:00:47.224733 | orchestrator |
2026-03-05 01:00:47.224737 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-05 01:00:47.224741 | orchestrator | Thursday 05 March 2026 00:59:45 +0000 (0:00:02.342) 0:10:45.168 ********
2026-03-05 01:00:47.224745 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-05 01:00:47.224748 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-05 01:00:47.224752 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.224756 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-05 01:00:47.224760 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-05 01:00:47.224764 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.224768 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-05 01:00:47.224771 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-05 01:00:47.224775 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.224779 | orchestrator |
2026-03-05 01:00:47.224783 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-05 01:00:47.224787 | orchestrator | Thursday 05 March 2026 00:59:47 +0000 (0:00:01.597) 0:10:46.766 ********
2026-03-05 01:00:47.224793 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.224797 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.224801 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.224805 | orchestrator |
2026-03-05 01:00:47.224809 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-05 01:00:47.224812 | orchestrator | Thursday 05 March 2026 00:59:47 +0000 (0:00:00.343) 0:10:47.110 ********
2026-03-05 01:00:47.224816 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.224820 | orchestrator |
2026-03-05 01:00:47.224824 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-05 01:00:47.224828 | orchestrator | Thursday 05 March 2026 00:59:48 +0000 (0:00:00.584) 0:10:47.694 ********
2026-03-05 01:00:47.224832 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-05 01:00:47.224838 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-05 01:00:47.224846 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-05 01:00:47.224850 | orchestrator |
2026-03-05 01:00:47.224854 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-05 01:00:47.224858 | orchestrator | Thursday 05 March 2026 00:59:49 +0000 (0:00:01.354) 0:10:49.049 ********
2026-03-05 01:00:47.224862 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-05 01:00:47.224865 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-05 01:00:47.224869 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-05 01:00:47.224873 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-05 01:00:47.224877 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-05 01:00:47.224881 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-05 01:00:47.224885 | orchestrator |
2026-03-05 01:00:47.224889 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-05 01:00:47.224892 | orchestrator | Thursday 05 March 2026 00:59:54 +0000 (0:00:04.543) 0:10:53.592 ********
2026-03-05 01:00:47.224896 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-05 01:00:47.224900 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-05 01:00:47.224904 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-05 01:00:47.224908 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-05 01:00:47.224911 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-05 01:00:47.224915 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-05 01:00:47.224919 | orchestrator |
2026-03-05 01:00:47.224923 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-05 01:00:47.224926 | orchestrator | Thursday 05 March 2026 00:59:56 +0000 (0:00:02.467) 0:10:56.060 ********
2026-03-05 01:00:47.224930 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-05 01:00:47.224934 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.224938 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-05 01:00:47.224942 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.224945 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-05 01:00:47.224949 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.224953 | orchestrator |
2026-03-05 01:00:47.224957 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-03-05 01:00:47.224961 | orchestrator | Thursday 05 March 2026 00:59:58 +0000 (0:00:01.386) 0:10:57.446 ********
2026-03-05 01:00:47.224964 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-03-05 01:00:47.224968 | orchestrator |
2026-03-05 01:00:47.224972 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-03-05 01:00:47.224976 | orchestrator | Thursday 05 March 2026 00:59:58 +0000 (0:00:00.232) 0:10:57.679 ********
2026-03-05 01:00:47.224980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.224984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.224987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.224995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.224999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.225002 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.225006 | orchestrator |
2026-03-05 01:00:47.225013 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-03-05 01:00:47.225017 | orchestrator | Thursday 05 March 2026 00:59:59 +0000 (0:00:01.364) 0:10:59.044 ********
2026-03-05 01:00:47.225020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.225024 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.225028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.225032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.225036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.225041 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.225045 | orchestrator |
2026-03-05 01:00:47.225049 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-03-05 01:00:47.225053 | orchestrator | Thursday 05 March 2026 01:00:00 +0000 (0:00:00.693) 0:10:59.737 ********
2026-03-05 01:00:47.225057 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.225061 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.225064 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.225068 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.225072 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-05 01:00:47.225076 | orchestrator |
2026-03-05 01:00:47.225080 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-03-05 01:00:47.225084 | orchestrator | Thursday 05 March 2026 01:00:31 +0000 (0:00:30.729) 0:11:30.467 ********
2026-03-05 01:00:47.225088 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.225091 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.225095 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.225099 | orchestrator |
2026-03-05 01:00:47.225103 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-03-05 01:00:47.225107 | orchestrator | Thursday 05 March 2026 01:00:31 +0000 (0:00:00.381) 0:11:30.848 ********
2026-03-05 01:00:47.225111 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.225114 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.225118 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.225122 | orchestrator |
2026-03-05 01:00:47.225126 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-03-05 01:00:47.225130 | orchestrator | Thursday 05 March 2026 01:00:31 +0000 (0:00:00.299) 0:11:31.147 ********
2026-03-05 01:00:47.225158 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.225166 | orchestrator |
2026-03-05 01:00:47.225170 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-03-05 01:00:47.225174 | orchestrator | Thursday 05 March 2026 01:00:32 +0000 (0:00:00.861) 0:11:32.009 ********
2026-03-05 01:00:47.225178 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:00:47.225181 | orchestrator |
2026-03-05 01:00:47.225185 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-03-05 01:00:47.225189 | orchestrator | Thursday 05 March 2026 01:00:33 +0000 (0:00:00.585) 0:11:32.594 ********
2026-03-05 01:00:47.225193 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.225197 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.225200 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.225204 | orchestrator |
2026-03-05 01:00:47.225208 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-03-05 01:00:47.225212 | orchestrator | Thursday 05 March 2026 01:00:34 +0000 (0:00:01.362) 0:11:33.956 ********
2026-03-05 01:00:47.225215 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.225219 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.225223 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.225227 | orchestrator |
2026-03-05 01:00:47.225230 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-03-05 01:00:47.225234 | orchestrator | Thursday 05 March 2026 01:00:36 +0000 (0:00:01.545) 0:11:35.502 ********
2026-03-05 01:00:47.225238 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:00:47.225242 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:00:47.225246 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:00:47.225249 | orchestrator |
2026-03-05 01:00:47.225253 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-03-05 01:00:47.225257 | orchestrator | Thursday 05 March 2026 01:00:38 +0000 (0:00:01.922) 0:11:37.424 ********
2026-03-05 01:00:47.225264 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-05 01:00:47.225268 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-05 01:00:47.225272 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-05 01:00:47.225275 | orchestrator |
2026-03-05 01:00:47.225279 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-05 01:00:47.225283 | orchestrator | Thursday 05 March 2026 01:00:40 +0000 (0:00:02.852) 0:11:40.277 ********
2026-03-05 01:00:47.225287 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:00:47.225290 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:00:47.225295 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:00:47.225301 | orchestrator
| 2026-03-05 01:00:47.225307 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-05 01:00:47.225312 | orchestrator | Thursday 05 March 2026 01:00:41 +0000 (0:00:00.361) 0:11:40.638 ******** 2026-03-05 01:00:47.225321 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:00:47.225327 | orchestrator | 2026-03-05 01:00:47.225333 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-05 01:00:47.225338 | orchestrator | Thursday 05 March 2026 01:00:41 +0000 (0:00:00.521) 0:11:41.160 ******** 2026-03-05 01:00:47.225343 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.225349 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.225357 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.225363 | orchestrator | 2026-03-05 01:00:47.225369 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-05 01:00:47.225375 | orchestrator | Thursday 05 March 2026 01:00:42 +0000 (0:00:00.592) 0:11:41.752 ******** 2026-03-05 01:00:47.225380 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:00:47.225392 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:00:47.225398 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:00:47.225404 | orchestrator | 2026-03-05 01:00:47.225410 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-05 01:00:47.225416 | orchestrator | Thursday 05 March 2026 01:00:42 +0000 (0:00:00.334) 0:11:42.087 ******** 2026-03-05 01:00:47.225422 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 01:00:47.225429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 01:00:47.225434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 01:00:47.225439 | orchestrator 
| skipping: [testbed-node-3] 2026-03-05 01:00:47.225445 | orchestrator | 2026-03-05 01:00:47.225450 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-05 01:00:47.225456 | orchestrator | Thursday 05 March 2026 01:00:43 +0000 (0:00:00.667) 0:11:42.754 ******** 2026-03-05 01:00:47.225461 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:00:47.225467 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:00:47.225473 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:00:47.225480 | orchestrator | 2026-03-05 01:00:47.225486 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:00:47.225492 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-05 01:00:47.225499 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-05 01:00:47.225506 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-05 01:00:47.225512 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-05 01:00:47.225516 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-05 01:00:47.225520 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-05 01:00:47.225524 | orchestrator | 2026-03-05 01:00:47.225527 | orchestrator | 2026-03-05 01:00:47.225531 | orchestrator | 2026-03-05 01:00:47.225535 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:00:47.225539 | orchestrator | Thursday 05 March 2026 01:00:43 +0000 (0:00:00.272) 0:11:43.026 ******** 2026-03-05 01:00:47.225543 | orchestrator | =============================================================================== 
2026-03-05 01:00:47.225547 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 44.34s 2026-03-05 01:00:47.225551 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 37.97s 2026-03-05 01:00:47.225554 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.46s 2026-03-05 01:00:47.225558 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.73s 2026-03-05 01:00:47.225562 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.99s 2026-03-05 01:00:47.225569 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.52s 2026-03-05 01:00:47.225573 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.55s 2026-03-05 01:00:47.225577 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.90s 2026-03-05 01:00:47.225580 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.18s 2026-03-05 01:00:47.225588 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.87s 2026-03-05 01:00:47.225592 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.52s 2026-03-05 01:00:47.225600 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.70s 2026-03-05 01:00:47.225604 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 5.63s 2026-03-05 01:00:47.225608 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.18s 2026-03-05 01:00:47.225612 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 4.87s 2026-03-05 01:00:47.225616 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.54s 2026-03-05 
01:00:47.225619 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.35s 2026-03-05 01:00:47.225623 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.26s 2026-03-05 01:00:47.225627 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.18s 2026-03-05 01:00:47.225631 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 4.18s 2026-03-05 01:00:47.225637 | orchestrator | 2026-03-05 01:00:47 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:50.259226 | orchestrator | 2026-03-05 01:00:50 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:00:50.259698 | orchestrator | 2026-03-05 01:00:50 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state STARTED 2026-03-05 01:00:50.260745 | orchestrator | 2026-03-05 01:00:50 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:00:50.260789 | orchestrator | 2026-03-05 01:00:50 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:53.298086 | orchestrator | 2026-03-05 01:00:53 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:00:53.298373 | orchestrator | 2026-03-05 01:00:53 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state STARTED 2026-03-05 01:00:53.299699 | orchestrator | 2026-03-05 01:00:53 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:00:53.299741 | orchestrator | 2026-03-05 01:00:53 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:56.354532 | orchestrator | 2026-03-05 01:00:56 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:00:56.354745 | orchestrator | 2026-03-05 01:00:56 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state STARTED 2026-03-05 01:00:56.358093 | orchestrator | 2026-03-05 01:00:56 | INFO  | Task 
6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:00:56.358267 | orchestrator | 2026-03-05 01:00:56 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:00:59.414387 | orchestrator | 2026-03-05 01:00:59 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:00:59.415715 | orchestrator | 2026-03-05 01:00:59 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state STARTED 2026-03-05 01:00:59.417273 | orchestrator | 2026-03-05 01:00:59 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:00:59.417322 | orchestrator | 2026-03-05 01:00:59 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:02.471536 | orchestrator | 2026-03-05 01:01:02 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:01:02.472756 | orchestrator | 2026-03-05 01:01:02 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state STARTED 2026-03-05 01:01:02.474740 | orchestrator | 2026-03-05 01:01:02 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:01:02.474800 | orchestrator | 2026-03-05 01:01:02 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:05.518832 | orchestrator | 2026-03-05 01:01:05 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:01:05.525260 | orchestrator | 2026-03-05 01:01:05 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state STARTED 2026-03-05 01:01:05.526492 | orchestrator | 2026-03-05 01:01:05 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:01:05.528101 | orchestrator | 2026-03-05 01:01:05 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:08.572250 | orchestrator | 2026-03-05 01:01:08 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:01:08.575434 | orchestrator | 2026-03-05 01:01:08 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state 
STARTED 2026-03-05 01:01:08.576624 | orchestrator | 2026-03-05 01:01:08 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:01:08.576887 | orchestrator | 2026-03-05 01:01:08 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:11.624005 | orchestrator | 2026-03-05 01:01:11 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:01:11.625269 | orchestrator | 2026-03-05 01:01:11 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state STARTED 2026-03-05 01:01:11.626009 | orchestrator | 2026-03-05 01:01:11 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:01:11.626098 | orchestrator | 2026-03-05 01:01:11 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:14.673392 | orchestrator | 2026-03-05 01:01:14 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:01:14.676439 | orchestrator | 2026-03-05 01:01:14 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state STARTED 2026-03-05 01:01:14.679376 | orchestrator | 2026-03-05 01:01:14 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:01:14.679485 | orchestrator | 2026-03-05 01:01:14 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:17.734413 | orchestrator | 2026-03-05 01:01:17 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:01:17.738833 | orchestrator | 2026-03-05 01:01:17 | INFO  | Task 967baff6-2275-4dd3-8782-2f979c3ca400 is in state SUCCESS 2026-03-05 01:01:17.741352 | orchestrator | 2026-03-05 01:01:17.741423 | orchestrator | 2026-03-05 01:01:17.741432 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:01:17.741440 | orchestrator | 2026-03-05 01:01:17.741446 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:01:17.741452 | orchestrator | 
Thursday 05 March 2026 00:58:23 +0000 (0:00:00.277) 0:00:00.277 ******** 2026-03-05 01:01:17.741459 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:01:17.741466 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:01:17.741472 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:01:17.741477 | orchestrator | 2026-03-05 01:01:17.741483 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:01:17.741489 | orchestrator | Thursday 05 March 2026 00:58:23 +0000 (0:00:00.295) 0:00:00.572 ******** 2026-03-05 01:01:17.741497 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-05 01:01:17.741504 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-05 01:01:17.741510 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-05 01:01:17.741517 | orchestrator | 2026-03-05 01:01:17.741523 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-05 01:01:17.741530 | orchestrator | 2026-03-05 01:01:17.741537 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-05 01:01:17.741668 | orchestrator | Thursday 05 March 2026 00:58:24 +0000 (0:00:00.443) 0:00:01.016 ******** 2026-03-05 01:01:17.741683 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:01:17.741714 | orchestrator | 2026-03-05 01:01:17.741721 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-05 01:01:17.741727 | orchestrator | Thursday 05 March 2026 00:58:24 +0000 (0:00:00.499) 0:00:01.515 ******** 2026-03-05 01:01:17.741734 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-05 01:01:17.741740 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-05 
01:01:17.741746 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-05 01:01:17.741752 | orchestrator | 2026-03-05 01:01:17.741758 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-05 01:01:17.741764 | orchestrator | Thursday 05 March 2026 00:58:25 +0000 (0:00:00.745) 0:00:02.261 ******** 2026-03-05 01:01:17.741774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 01:01:17.741799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 01:01:17.741819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 01:01:17.741829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 01:01:17.741843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 01:01:17.741854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 01:01:17.741861 | orchestrator | 2026-03-05 01:01:17.741867 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-05 01:01:17.741873 | orchestrator | Thursday 05 March 2026 00:58:27 +0000 (0:00:01.863) 0:00:04.124 ******** 2026-03-05 01:01:17.741879 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:01:17.741885 | orchestrator | 2026-03-05 01:01:17.741891 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-05 01:01:17.741897 | orchestrator | Thursday 05 March 2026 00:58:27 +0000 (0:00:00.596) 0:00:04.721 ******** 2026-03-05 01:01:17.741912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 
01:01:17.741925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 01:01:17.741932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 01:01:17.741942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 01:01:17.741955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 
01:01:17.741966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 01:01:17.741973 | orchestrator | 2026-03-05 01:01:17.741979 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-05 01:01:17.741985 | orchestrator | Thursday 05 March 2026 00:58:30 +0000 (0:00:02.893) 0:00:07.615 ******** 2026-03-05 01:01:17.741993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 01:01:17.742001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 01:01:17.742005 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:17.742046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 01:01:17.742056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 01:01:17.742060 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:17.742064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 01:01:17.742071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 01:01:17.742076 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:17.742079 | orchestrator | 2026-03-05 01:01:17.742083 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-05 01:01:17.742087 | orchestrator | Thursday 05 March 2026 00:58:32 +0000 (0:00:01.467) 0:00:09.082 ******** 2026-03-05 01:01:17.742096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 01:01:17.742107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 01:01:17.742113 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:17.742120 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 01:01:17.742131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 01:01:17.742181 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-05 01:01:17.742195 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:17.742202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}})  2026-03-05 01:01:17.742209 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:17.742216 | orchestrator | 2026-03-05 01:01:17.742222 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-05 01:01:17.742229 | orchestrator | Thursday 05 March 2026 00:58:33 +0000 (0:00:01.135) 0:00:10.218 ******** 2026-03-05 01:01:17.742236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 01:01:17.742248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 01:01:17.742255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 01:01:17.742278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 01:01:17.742286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 01:01:17.742298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 01:01:17.742305 | orchestrator | 2026-03-05 01:01:17.742312 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-05 01:01:17.742318 | orchestrator | Thursday 05 March 2026 00:58:35 +0000 (0:00:02.547) 0:00:12.765 ******** 2026-03-05 01:01:17.742325 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:17.742331 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:01:17.742344 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:01:17.742348 | orchestrator | 2026-03-05 01:01:17.742352 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-05 01:01:17.742356 | orchestrator | Thursday 05 March 2026 00:58:39 +0000 (0:00:03.220) 0:00:15.986 ******** 2026-03-05 01:01:17.742360 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:17.742364 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:01:17.742367 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:01:17.742371 | orchestrator | 2026-03-05 01:01:17.742375 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-05 01:01:17.742379 | orchestrator | Thursday 05 March 2026 00:58:41 +0000 (0:00:01.984) 0:00:17.970 ******** 2026-03-05 01:01:17.742389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 01:01:17.742394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 01:01:17.742398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-05 01:01:17.742406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 01:01:17.742417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 01:01:17.742422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-05 01:01:17.742427 | orchestrator | 2026-03-05 01:01:17.742433 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-05 01:01:17.742439 | orchestrator | Thursday 05 March 2026 00:58:43 +0000 (0:00:02.446) 0:00:20.416 ******** 2026-03-05 01:01:17.742444 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:17.742450 | 
orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:17.742456 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:17.742462 | orchestrator | 2026-03-05 01:01:17.742468 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-05 01:01:17.742474 | orchestrator | Thursday 05 March 2026 00:58:43 +0000 (0:00:00.282) 0:00:20.699 ******** 2026-03-05 01:01:17.742480 | orchestrator | 2026-03-05 01:01:17.742486 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-05 01:01:17.742493 | orchestrator | Thursday 05 March 2026 00:58:43 +0000 (0:00:00.074) 0:00:20.774 ******** 2026-03-05 01:01:17.742499 | orchestrator | 2026-03-05 01:01:17.742505 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-05 01:01:17.742511 | orchestrator | Thursday 05 March 2026 00:58:44 +0000 (0:00:00.081) 0:00:20.855 ******** 2026-03-05 01:01:17.742517 | orchestrator | 2026-03-05 01:01:17.742523 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-05 01:01:17.742529 | orchestrator | Thursday 05 March 2026 00:58:44 +0000 (0:00:00.072) 0:00:20.927 ******** 2026-03-05 01:01:17.742535 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:17.742546 | orchestrator | 2026-03-05 01:01:17.742552 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-05 01:01:17.742559 | orchestrator | Thursday 05 March 2026 00:58:44 +0000 (0:00:00.772) 0:00:21.699 ******** 2026-03-05 01:01:17.742565 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:17.742572 | orchestrator | 2026-03-05 01:01:17.742578 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-05 01:01:17.742585 | orchestrator | Thursday 05 March 2026 00:58:45 +0000 (0:00:00.193) 0:00:21.893 ******** 2026-03-05 01:01:17.742591 | 
orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:17.742598 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:01:17.742604 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:01:17.742607 | orchestrator | 2026-03-05 01:01:17.742611 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-05 01:01:17.742615 | orchestrator | Thursday 05 March 2026 00:59:51 +0000 (0:01:06.338) 0:01:28.232 ******** 2026-03-05 01:01:17.742622 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:17.742626 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:01:17.742630 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:01:17.742634 | orchestrator | 2026-03-05 01:01:17.742637 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-05 01:01:17.742641 | orchestrator | Thursday 05 March 2026 01:01:02 +0000 (0:01:11.486) 0:02:39.718 ******** 2026-03-05 01:01:17.742645 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:01:17.742649 | orchestrator | 2026-03-05 01:01:17.742653 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-05 01:01:17.742657 | orchestrator | Thursday 05 March 2026 01:01:03 +0000 (0:00:00.856) 0:02:40.575 ******** 2026-03-05 01:01:17.742661 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:01:17.742665 | orchestrator | 2026-03-05 01:01:17.742671 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-05 01:01:17.742677 | orchestrator | Thursday 05 March 2026 01:01:06 +0000 (0:00:02.764) 0:02:43.339 ******** 2026-03-05 01:01:17.742683 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:01:17.742689 | orchestrator | 2026-03-05 01:01:17.742695 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-05 
01:01:17.742701 | orchestrator | Thursday 05 March 2026 01:01:09 +0000 (0:00:02.656) 0:02:45.996 ******** 2026-03-05 01:01:17.742707 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:17.742713 | orchestrator | 2026-03-05 01:01:17.742719 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-05 01:01:17.742725 | orchestrator | Thursday 05 March 2026 01:01:12 +0000 (0:00:03.064) 0:02:49.060 ******** 2026-03-05 01:01:17.742731 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:17.742737 | orchestrator | 2026-03-05 01:01:17.742747 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:01:17.742755 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-05 01:01:17.742763 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-05 01:01:17.742768 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-05 01:01:17.742775 | orchestrator | 2026-03-05 01:01:17.742781 | orchestrator | 2026-03-05 01:01:17.742788 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:01:17.742794 | orchestrator | Thursday 05 March 2026 01:01:15 +0000 (0:00:02.722) 0:02:51.782 ******** 2026-03-05 01:01:17.742800 | orchestrator | =============================================================================== 2026-03-05 01:01:17.742806 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 71.49s 2026-03-05 01:01:17.742818 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.34s 2026-03-05 01:01:17.742824 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.22s 2026-03-05 01:01:17.742830 | orchestrator | opensearch : Create new log 
retention policy ---------------------------- 3.06s 2026-03-05 01:01:17.742836 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.89s 2026-03-05 01:01:17.742842 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.76s 2026-03-05 01:01:17.742849 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.72s 2026-03-05 01:01:17.742855 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.66s 2026-03-05 01:01:17.742861 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.55s 2026-03-05 01:01:17.742867 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.45s 2026-03-05 01:01:17.742874 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.98s 2026-03-05 01:01:17.742880 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.86s 2026-03-05 01:01:17.742886 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.47s 2026-03-05 01:01:17.742892 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.14s 2026-03-05 01:01:17.742898 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.86s 2026-03-05 01:01:17.742903 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.77s 2026-03-05 01:01:17.742909 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.75s 2026-03-05 01:01:17.742915 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.60s 2026-03-05 01:01:17.742921 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-03-05 01:01:17.742927 | orchestrator | Group hosts based on enabled 
services ----------------------------------- 0.44s 2026-03-05 01:01:17.743398 | orchestrator | 2026-03-05 01:01:17 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:01:17.744019 | orchestrator | 2026-03-05 01:01:17 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:20.799839 | orchestrator | 2026-03-05 01:01:20 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:01:20.800198 | orchestrator | 2026-03-05 01:01:20 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:01:20.800506 | orchestrator | 2026-03-05 01:01:20 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:23.856625 | orchestrator | 2026-03-05 01:01:23 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:01:23.859495 | orchestrator | 2026-03-05 01:01:23 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:01:23.859571 | orchestrator | 2026-03-05 01:01:23 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:26.901452 | orchestrator | 2026-03-05 01:01:26 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:01:26.904320 | orchestrator | 2026-03-05 01:01:26 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:01:26.904372 | orchestrator | 2026-03-05 01:01:26 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:29.953499 | orchestrator | 2026-03-05 01:01:29 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:01:29.953657 | orchestrator | 2026-03-05 01:01:29 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state STARTED 2026-03-05 01:01:29.953684 | orchestrator | 2026-03-05 01:01:29 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:32.998326 | orchestrator | 2026-03-05 01:01:32 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:01:33.014677 | 
orchestrator | 2026-03-05 01:01:33 | INFO  | Task 6a7f872b-1485-4f3b-a332-37b37c828259 is in state SUCCESS
2026-03-05 01:01:33.015690 | orchestrator |
2026-03-05 01:01:33.015733 | orchestrator |
2026-03-05 01:01:33.015741 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-03-05 01:01:33.015749 | orchestrator |
2026-03-05 01:01:33.015755 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-05 01:01:33.015763 | orchestrator | Thursday 05 March 2026 00:58:23 +0000 (0:00:00.096) 0:00:00.096 ********
2026-03-05 01:01:33.015769 | orchestrator | ok: [localhost] => {
2026-03-05 01:01:33.015778 | orchestrator |     "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-03-05 01:01:33.015784 | orchestrator | }
2026-03-05 01:01:33.015790 | orchestrator |
2026-03-05 01:01:33.015796 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-03-05 01:01:33.015802 | orchestrator | Thursday 05 March 2026 00:58:23 +0000 (0:00:00.050) 0:00:00.147 ********
2026-03-05 01:01:33.015808 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-03-05 01:01:33.015816 | orchestrator | ...ignoring
2026-03-05 01:01:33.015822 | orchestrator |
2026-03-05 01:01:33.015828 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-03-05 01:01:33.015834 | orchestrator | Thursday 05 March 2026 00:58:26 +0000 (0:00:02.873) 0:00:03.021 ********
2026-03-05 01:01:33.015840 | orchestrator | skipping: [localhost]
2026-03-05 01:01:33.015846 | orchestrator |
2026-03-05 01:01:33.015851 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-03-05 01:01:33.015857 | orchestrator | Thursday 05 March 2026 00:58:26 +0000 (0:00:00.054) 0:00:03.076 ********
2026-03-05 01:01:33.015862 | orchestrator | ok: [localhost]
2026-03-05 01:01:33.015868 | orchestrator |
2026-03-05 01:01:33.015874 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:01:33.015880 | orchestrator |
2026-03-05 01:01:33.015885 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:01:33.015891 | orchestrator | Thursday 05 March 2026 00:58:26 +0000 (0:00:00.167) 0:00:03.243 ********
2026-03-05 01:01:33.015897 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:01:33.015903 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:01:33.015909 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:01:33.015915 | orchestrator |
2026-03-05 01:01:33.015921 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:01:33.015926 | orchestrator | Thursday 05 March 2026 00:58:26 +0000 (0:00:00.343) 0:00:03.587 ********
2026-03-05 01:01:33.015932 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-05 01:01:33.015957 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
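The expected "Check MariaDB service" failure above comes from a TCP probe that connects to 192.168.16.9:3306 and scans what the server sends for the string "MariaDB" (a MariaDB server announces its version in the initial protocol handshake). Since nothing is listening yet, the probe times out, and the play ignores the error. A minimal, hypothetical standalone sketch of such a probe (not the task's actual module code; `probe_banner` is an illustrative name):

```python
import socket


def probe_banner(host: str, port: int, needle: bytes, timeout: float = 2.0) -> bool:
    """Return True if the first bytes sent by host:port contain `needle`.

    Similar in spirit to waiting for a search string on a TCP port:
    open a connection and scan the data the server volunteers.
    Any connection failure or timeout counts as "not found".
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            data = sock.recv(4096)  # handshake/banner bytes, if any
            return needle in data
    except OSError:  # refused, timed out, reset, unreachable, ...
        return False
```

Checking the banner rather than just the open port matters during deployment: the port can be held by a proxy or a starting container before the database is actually answering.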
2026-03-05 01:01:33.015970 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-05 01:01:33.015976 | orchestrator | 2026-03-05 01:01:33.015984 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-05 01:01:33.015990 | orchestrator | 2026-03-05 01:01:33.015996 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-05 01:01:33.016003 | orchestrator | Thursday 05 March 2026 00:58:27 +0000 (0:00:00.695) 0:00:04.282 ******** 2026-03-05 01:01:33.016009 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-05 01:01:33.016017 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-05 01:01:33.016023 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-05 01:01:33.016030 | orchestrator | 2026-03-05 01:01:33.016037 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-05 01:01:33.016044 | orchestrator | Thursday 05 March 2026 00:58:28 +0000 (0:00:00.425) 0:00:04.708 ******** 2026-03-05 01:01:33.016051 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:01:33.016079 | orchestrator | 2026-03-05 01:01:33.016085 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-05 01:01:33.016092 | orchestrator | Thursday 05 March 2026 00:58:28 +0000 (0:00:00.563) 0:00:05.271 ******** 2026-03-05 01:01:33.016136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 01:01:33.016174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 01:01:33.016187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 01:01:33.016200 | orchestrator | 2026-03-05 01:01:33.016213 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-05 01:01:33.016220 | orchestrator | Thursday 05 March 2026 00:58:32 +0000 (0:00:03.566) 0:00:08.837 ******** 2026-03-05 01:01:33.016226 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:33.016233 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.016239 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.016246 | orchestrator | 2026-03-05 01:01:33.016252 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-05 01:01:33.016258 | orchestrator | Thursday 05 March 2026 00:58:33 +0000 (0:00:00.946) 0:00:09.783 ******** 2026-03-05 01:01:33.016264 | orchestrator | 
skipping: [testbed-node-2] 2026-03-05 01:01:33.016270 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.016276 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:33.016283 | orchestrator | 2026-03-05 01:01:33.016290 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-05 01:01:33.016297 | orchestrator | Thursday 05 March 2026 00:58:34 +0000 (0:00:01.490) 0:00:11.273 ******** 2026-03-05 01:01:33.016306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 01:01:33.016329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 01:01:33.016338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-05 01:01:33.016351 | orchestrator |
2026-03-05 01:01:33.016359 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-03-05 01:01:33.016367 | orchestrator | Thursday 05 March 2026 00:58:38 +0000 (0:00:01.189) 0:00:15.552 ********
2026-03-05 01:01:33.016375 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:01:33.016382 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:01:33.016390 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:01:33.016398 | orchestrator |
2026-03-05 01:01:33.016405 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-03-05 01:01:33.016413 | orchestrator | Thursday 05 March 2026 00:58:40 +0000 (0:00:01.189) 0:00:16.742 ********
2026-03-05 01:01:33.016420 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:01:33.016427 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:01:33.016433 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:01:33.016440 | orchestrator |
2026-03-05 01:01:33.016447 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-05 01:01:33.016457 | orchestrator | Thursday 05 March 2026 00:58:44 +0000 (0:00:04.420) 0:00:21.162 ********
2026-03-05 01:01:33.016464 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:01:33.016471 | orchestrator |
2026-03-05 01:01:33.016478 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-05 01:01:33.016485 | orchestrator | Thursday 05 March 2026 00:58:45 +0000 (0:00:00.552) 0:00:21.714 ********
2026-03-05 01:01:33.016498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 01:01:33.016505 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.016512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 01:01:33.016525 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:33.016538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 01:01:33.016546 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.016553 | orchestrator | 2026-03-05 01:01:33.016560 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-05 01:01:33.016567 | orchestrator | Thursday 05 March 2026 00:58:49 
+0000 (0:00:04.447) 0:00:26.162 ******** 2026-03-05 01:01:33.016601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 01:01:33.016624 | orchestrator | skipping: [testbed-node-0] 2026-03-05 
01:01:33.016639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 01:01:33.016647 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.016653 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 01:01:33.016664 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.016671 | orchestrator | 2026-03-05 01:01:33.016677 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2026-03-05 01:01:33.016684 | orchestrator | Thursday 05 March 2026 00:58:53 +0000 (0:00:03.622) 0:00:29.785 ******** 2026-03-05 01:01:33.016699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-03-05 01:01:33.016706 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.016712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 01:01:33.016724 
| orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:33.016734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-05 01:01:33.016741 | orchestrator | skipping: [testbed-node-1] 2026-03-05 
01:01:33.016747 | orchestrator | 2026-03-05 01:01:33.016754 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-05 01:01:33.016760 | orchestrator | Thursday 05 March 2026 00:58:56 +0000 (0:00:03.399) 0:00:33.185 ******** 2026-03-05 01:01:33.016771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 01:01:33.016789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-03-05 01:01:33.016802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-05 01:01:33.016814 | orchestrator | 2026-03-05 01:01:33.016821 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2026-03-05 01:01:33.016828 | orchestrator | Thursday 05 March 2026 00:59:00 +0000 (0:00:04.067) 0:00:37.253 ******** 2026-03-05 01:01:33.016835 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:33.016841 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:01:33.016847 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:01:33.016854 | orchestrator | 2026-03-05 01:01:33.016860 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-05 01:01:33.016866 | orchestrator | Thursday 05 March 2026 00:59:01 +0000 (0:00:01.125) 0:00:38.378 ******** 2026-03-05 01:01:33.016873 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:01:33.016880 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:01:33.016887 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:01:33.016894 | orchestrator | 2026-03-05 01:01:33.016900 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-05 01:01:33.016907 | orchestrator | Thursday 05 March 2026 00:59:02 +0000 (0:00:00.322) 0:00:38.701 ******** 2026-03-05 01:01:33.016913 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:01:33.016920 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:01:33.016926 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:01:33.016932 | orchestrator | 2026-03-05 01:01:33.016939 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-05 01:01:33.016945 | orchestrator | Thursday 05 March 2026 00:59:02 +0000 (0:00:00.565) 0:00:39.266 ******** 2026-03-05 01:01:33.016954 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-05 01:01:33.016961 | orchestrator | ...ignoring 2026-03-05 01:01:33.016968 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-05 01:01:33.016975 | orchestrator | ...ignoring 2026-03-05 01:01:33.016986 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-05 01:01:33.016993 | orchestrator | ...ignoring 2026-03-05 01:01:33.017000 | orchestrator | 2026-03-05 01:01:33.017006 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-05 01:01:33.017013 | orchestrator | Thursday 05 March 2026 00:59:13 +0000 (0:00:11.079) 0:00:50.346 ******** 2026-03-05 01:01:33.017020 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:01:33.017027 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:01:33.017033 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:01:33.017040 | orchestrator | 2026-03-05 01:01:33.017047 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-05 01:01:33.017054 | orchestrator | Thursday 05 March 2026 00:59:14 +0000 (0:00:00.466) 0:00:50.813 ******** 2026-03-05 01:01:33.017061 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:33.017073 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.017080 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.017086 | orchestrator | 2026-03-05 01:01:33.017092 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-05 01:01:33.017099 | orchestrator | Thursday 05 March 2026 00:59:14 +0000 (0:00:00.723) 0:00:51.537 ******** 2026-03-05 01:01:33.017106 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:33.017112 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.017119 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.017125 | orchestrator | 2026-03-05 01:01:33.017130 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-05 01:01:33.017136 | orchestrator | Thursday 05 March 2026 00:59:15 +0000 (0:00:00.553) 0:00:52.090 ******** 2026-03-05 01:01:33.017191 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:33.017200 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.017207 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.017213 | orchestrator | 2026-03-05 01:01:33.017220 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-05 01:01:33.017232 | orchestrator | Thursday 05 March 2026 00:59:15 +0000 (0:00:00.497) 0:00:52.587 ******** 2026-03-05 01:01:33.017240 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:01:33.017246 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:01:33.017253 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:01:33.017260 | orchestrator | 2026-03-05 01:01:33.017267 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-05 01:01:33.017274 | orchestrator | Thursday 05 March 2026 00:59:16 +0000 (0:00:00.465) 0:00:53.053 ******** 2026-03-05 01:01:33.017281 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:33.017288 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.017294 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.017301 | orchestrator | 2026-03-05 01:01:33.017308 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-05 01:01:33.017315 | orchestrator | Thursday 05 March 2026 00:59:17 +0000 (0:00:00.670) 0:00:53.723 ******** 2026-03-05 01:01:33.017321 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.017328 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.017335 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-05 01:01:33.017341 | orchestrator | 2026-03-05 
01:01:33.017348 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-05 01:01:33.017355 | orchestrator | Thursday 05 March 2026 00:59:17 +0000 (0:00:00.414) 0:00:54.138 ******** 2026-03-05 01:01:33.017361 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:33.017368 | orchestrator | 2026-03-05 01:01:33.017374 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-05 01:01:33.017381 | orchestrator | Thursday 05 March 2026 00:59:27 +0000 (0:00:10.359) 0:01:04.497 ******** 2026-03-05 01:01:33.017387 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:01:33.017394 | orchestrator | 2026-03-05 01:01:33.017400 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-05 01:01:33.017406 | orchestrator | Thursday 05 March 2026 00:59:28 +0000 (0:00:00.143) 0:01:04.641 ******** 2026-03-05 01:01:33.017412 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:33.017419 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.017426 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.017432 | orchestrator | 2026-03-05 01:01:33.017439 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-05 01:01:33.017445 | orchestrator | Thursday 05 March 2026 00:59:29 +0000 (0:00:01.095) 0:01:05.737 ******** 2026-03-05 01:01:33.017452 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:33.017459 | orchestrator | 2026-03-05 01:01:33.017465 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-05 01:01:33.017472 | orchestrator | Thursday 05 March 2026 00:59:37 +0000 (0:00:08.251) 0:01:13.988 ******** 2026-03-05 01:01:33.017487 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:01:33.017493 | orchestrator | 2026-03-05 01:01:33.017499 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-03-05 01:01:33.017506 | orchestrator | Thursday 05 March 2026 00:59:39 +0000 (0:00:01.657) 0:01:15.646 ******** 2026-03-05 01:01:33.017512 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:01:33.017519 | orchestrator | 2026-03-05 01:01:33.017525 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-05 01:01:33.017531 | orchestrator | Thursday 05 March 2026 00:59:41 +0000 (0:00:02.778) 0:01:18.424 ******** 2026-03-05 01:01:33.017538 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:33.017544 | orchestrator | 2026-03-05 01:01:33.017551 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-05 01:01:33.017557 | orchestrator | Thursday 05 March 2026 00:59:41 +0000 (0:00:00.116) 0:01:18.541 ******** 2026-03-05 01:01:33.017564 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:33.017570 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.017576 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.017583 | orchestrator | 2026-03-05 01:01:33.017589 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-05 01:01:33.017596 | orchestrator | Thursday 05 March 2026 00:59:42 +0000 (0:00:00.346) 0:01:18.888 ******** 2026-03-05 01:01:33.017603 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:33.017610 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-05 01:01:33.017622 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:01:33.017629 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:01:33.017636 | orchestrator | 2026-03-05 01:01:33.017642 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-05 01:01:33.017649 | orchestrator | skipping: no hosts matched 2026-03-05 01:01:33.017655 | orchestrator | 2026-03-05 01:01:33.017662 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-05 01:01:33.017669 | orchestrator | 2026-03-05 01:01:33.017676 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-05 01:01:33.017683 | orchestrator | Thursday 05 March 2026 00:59:42 +0000 (0:00:00.569) 0:01:19.457 ******** 2026-03-05 01:01:33.017689 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:01:33.017695 | orchestrator | 2026-03-05 01:01:33.017702 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-05 01:01:33.017708 | orchestrator | Thursday 05 March 2026 01:00:00 +0000 (0:00:17.481) 0:01:36.939 ******** 2026-03-05 01:01:33.017714 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:01:33.017721 | orchestrator | 2026-03-05 01:01:33.017728 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-05 01:01:33.017734 | orchestrator | Thursday 05 March 2026 01:00:16 +0000 (0:00:16.631) 0:01:53.570 ******** 2026-03-05 01:01:33.017741 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:01:33.017748 | orchestrator | 2026-03-05 01:01:33.017754 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-05 01:01:33.017761 | orchestrator | 2026-03-05 01:01:33.017767 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-05 01:01:33.017774 | orchestrator | Thursday 05 March 2026 01:00:19 +0000 (0:00:02.741) 0:01:56.312 ******** 2026-03-05 01:01:33.017781 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:01:33.017787 | orchestrator | 2026-03-05 01:01:33.017794 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-05 01:01:33.017806 | orchestrator | Thursday 05 March 2026 01:00:37 +0000 (0:00:18.305) 0:02:14.618 ******** 2026-03-05 01:01:33.017813 | 
orchestrator | ok: [testbed-node-2] 2026-03-05 01:01:33.017819 | orchestrator | 2026-03-05 01:01:33.017826 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-05 01:01:33.017832 | orchestrator | Thursday 05 March 2026 01:00:53 +0000 (0:00:15.604) 0:02:30.222 ******** 2026-03-05 01:01:33.017839 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:01:33.017846 | orchestrator | 2026-03-05 01:01:33.017860 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-05 01:01:33.017867 | orchestrator | 2026-03-05 01:01:33.017873 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-05 01:01:33.017880 | orchestrator | Thursday 05 March 2026 01:00:56 +0000 (0:00:02.588) 0:02:32.811 ******** 2026-03-05 01:01:33.017887 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:33.017893 | orchestrator | 2026-03-05 01:01:33.017900 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-05 01:01:33.017906 | orchestrator | Thursday 05 March 2026 01:01:14 +0000 (0:00:18.018) 0:02:50.830 ******** 2026-03-05 01:01:33.017913 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:01:33.017920 | orchestrator | 2026-03-05 01:01:33.017926 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-05 01:01:33.017933 | orchestrator | Thursday 05 March 2026 01:01:14 +0000 (0:00:00.577) 0:02:51.408 ******** 2026-03-05 01:01:33.017939 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:01:33.017946 | orchestrator | 2026-03-05 01:01:33.017953 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-05 01:01:33.017960 | orchestrator | 2026-03-05 01:01:33.017966 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-05 01:01:33.017973 | orchestrator | 
Thursday 05 March 2026 01:01:17 +0000 (0:00:03.014) 0:02:54.422 ******** 2026-03-05 01:01:33.017980 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:01:33.017986 | orchestrator | 2026-03-05 01:01:33.017993 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-05 01:01:33.017999 | orchestrator | Thursday 05 March 2026 01:01:18 +0000 (0:00:00.558) 0:02:54.980 ******** 2026-03-05 01:01:33.018005 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.018011 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.018078 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:33.018085 | orchestrator | 2026-03-05 01:01:33.018092 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-05 01:01:33.018099 | orchestrator | Thursday 05 March 2026 01:01:20 +0000 (0:00:02.608) 0:02:57.589 ******** 2026-03-05 01:01:33.018106 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.018113 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.018120 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:33.018125 | orchestrator | 2026-03-05 01:01:33.018131 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-05 01:01:33.018137 | orchestrator | Thursday 05 March 2026 01:01:23 +0000 (0:00:02.496) 0:03:00.086 ******** 2026-03-05 01:01:33.018204 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.018212 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.018218 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:33.018224 | orchestrator | 2026-03-05 01:01:33.018231 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-05 01:01:33.018237 | orchestrator | Thursday 05 March 2026 01:01:25 +0000 (0:00:02.493) 0:03:02.579 ******** 2026-03-05 01:01:33.018244 | 
orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.018250 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.018257 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:01:33.018263 | orchestrator | 2026-03-05 01:01:33.018270 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-05 01:01:33.018277 | orchestrator | Thursday 05 March 2026 01:01:28 +0000 (0:00:02.663) 0:03:05.243 ******** 2026-03-05 01:01:33.018283 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:01:33.018291 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:01:33.018297 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:01:33.018304 | orchestrator | 2026-03-05 01:01:33.018311 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-05 01:01:33.018317 | orchestrator | Thursday 05 March 2026 01:01:32 +0000 (0:00:03.396) 0:03:08.640 ******** 2026-03-05 01:01:33.018323 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:01:33.018351 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:01:33.018358 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:01:33.018364 | orchestrator | 2026-03-05 01:01:33.018370 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:01:33.018378 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-05 01:01:33.018386 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-05 01:01:33.018395 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-05 01:01:33.018402 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-05 01:01:33.018408 | orchestrator | 2026-03-05 01:01:33.018415 | orchestrator | 2026-03-05 01:01:33.018422 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-05 01:01:33.018428 | orchestrator | Thursday 05 March 2026 01:01:32 +0000 (0:00:00.237) 0:03:08.877 ******** 2026-03-05 01:01:33.018435 | orchestrator | =============================================================================== 2026-03-05 01:01:33.018442 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.79s 2026-03-05 01:01:33.018448 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.24s 2026-03-05 01:01:33.018464 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 18.02s 2026-03-05 01:01:33.018470 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.08s 2026-03-05 01:01:33.018477 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.36s 2026-03-05 01:01:33.018484 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.25s 2026-03-05 01:01:33.018491 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.33s 2026-03-05 01:01:33.018497 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.45s 2026-03-05 01:01:33.018504 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.42s 2026-03-05 01:01:33.018510 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.28s 2026-03-05 01:01:33.018517 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.07s 2026-03-05 01:01:33.018523 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.62s 2026-03-05 01:01:33.018531 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.57s 2026-03-05 01:01:33.018537 | orchestrator | 
service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.40s 2026-03-05 01:01:33.018543 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.40s 2026-03-05 01:01:33.018550 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.01s 2026-03-05 01:01:33.018556 | orchestrator | Check MariaDB service --------------------------------------------------- 2.87s 2026-03-05 01:01:33.018563 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.78s 2026-03-05 01:01:33.018570 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.66s 2026-03-05 01:01:33.018576 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.61s 2026-03-05 01:01:33.018583 | orchestrator | 2026-03-05 01:01:33 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:36.066760 | orchestrator | 2026-03-05 01:01:36 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:01:36.068612 | orchestrator | 2026-03-05 01:01:36 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:01:36.071078 | orchestrator | 2026-03-05 01:01:36 | INFO  | Task 46bf9fed-e38a-4a61-acd5-ec072c095655 is in state STARTED 2026-03-05 01:01:36.071222 | orchestrator | 2026-03-05 01:01:36 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:01:39.124673 | orchestrator | 2026-03-05 01:01:39 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:01:39.125676 | orchestrator | 2026-03-05 01:01:39 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED 2026-03-05 01:01:39.127707 | orchestrator | 2026-03-05 01:01:39 | INFO  | Task 46bf9fed-e38a-4a61-acd5-ec072c095655 is in state STARTED 2026-03-05 01:01:39.127754 | orchestrator | 2026-03-05 01:01:39 | INFO  | Wait 1 second(s) until the next check 2026-03-05 
01:01:42.170597 | orchestrator | 2026-03-05 01:01:42 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED
2026-03-05 01:01:42.171665 | orchestrator | 2026-03-05 01:01:42 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED
2026-03-05 01:01:42.173376 | orchestrator | 2026-03-05 01:01:42 | INFO  | Task 46bf9fed-e38a-4a61-acd5-ec072c095655 is in state STARTED
2026-03-05 01:01:42.173444 | orchestrator | 2026-03-05 01:01:42 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:03:01.476806 | orchestrator | 2026-03-05 01:03:01 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED
2026-03-05 01:03:01.477731 | orchestrator | 2026-03-05 01:03:01 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state STARTED
2026-03-05 01:03:01.480487 | orchestrator | 2026-03-05 01:03:01 | INFO  | Task 46bf9fed-e38a-4a61-acd5-ec072c095655 is in state STARTED
2026-03-05 01:03:01.480557 | orchestrator | 2026-03-05 01:03:01 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:03:04.539442 | orchestrator | 2026-03-05 01:03:04 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED
2026-03-05 01:03:04.542890 | orchestrator |
2026-03-05 01:03:04.542973 | orchestrator | 2026-03-05 01:03:04 | INFO  | Task 99b5581e-e94a-4030-864f-ea70926d07c1 is in state SUCCESS
2026-03-05 01:03:04.544421 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-05 01:03:04.544481 | orchestrator | 2.16.14
2026-03-05 01:03:04.544488 | orchestrator |
2026-03-05 01:03:04.544505 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-05 01:03:04.544510 | orchestrator |
2026-03-05 01:03:04.544514 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-05 01:03:04.544520 | orchestrator | Thursday 05 March 2026 01:00:48 +0000 (0:00:00.588) 0:00:00.588 ********
2026-03-05 01:03:04.544524 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:03:04.544529 | orchestrator |
2026-03-05 01:03:04.544533 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-05 01:03:04.544537 | orchestrator | Thursday 05 March 2026 01:00:49 +0000 (0:00:00.678) 0:00:01.267 ********
2026-03-05 01:03:04.544541 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:03:04.544545 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:03:04.544549 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:03:04.544553 | orchestrator |
2026-03-05 01:03:04.544557 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-05 01:03:04.544561 | orchestrator | Thursday 05 March 2026 01:00:50 +0000 (0:00:00.625) 0:00:01.892 ********
2026-03-05 01:03:04.544565 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:03:04.544569 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:03:04.544573 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:03:04.544675 | orchestrator |
2026-03-05 01:03:04.544684 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-05 01:03:04.544690 | orchestrator | Thursday 05 March 2026 01:00:50 +0000 (0:00:00.309) 0:00:02.202 ********
2026-03-05 01:03:04.544696 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:03:04.544702 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:03:04.544707 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:03:04.544743 | orchestrator |
2026-03-05 01:03:04.544750 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-05 01:03:04.544756 | orchestrator | Thursday 05 March 2026 01:00:51 +0000 (0:00:00.891) 0:00:03.093 ********
2026-03-05 01:03:04.544762 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:03:04.544767 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:03:04.544774 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:03:04.544780 | orchestrator |
2026-03-05 01:03:04.544787 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-05 01:03:04.544793 | orchestrator | Thursday 05 March 2026 01:00:51 +0000 (0:00:00.340) 0:00:03.433 ********
2026-03-05 01:03:04.544873 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:03:04.544878 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:03:04.544882 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:03:04.544886 | orchestrator |
2026-03-05 01:03:04.544890 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-05 01:03:04.544894 | orchestrator | Thursday 05 March 2026 01:00:51 +0000 (0:00:00.321) 0:00:03.754 ********
2026-03-05 01:03:04.544898 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:03:04.544902 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:03:04.544906 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:03:04.544910 | orchestrator |
2026-03-05 01:03:04.544914 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-05 01:03:04.545131 | orchestrator | Thursday 05 March 2026 01:00:52 +0000 (0:00:00.318) 0:00:04.073 ********
2026-03-05 01:03:04.545179 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.545185 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:03:04.545190 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:03:04.545195 | orchestrator |
2026-03-05 01:03:04.545200 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-05 01:03:04.545220 | orchestrator | Thursday 05 March 2026 01:00:52 +0000 (0:00:00.523) 0:00:04.597 ********
2026-03-05 01:03:04.545225 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:03:04.545230 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:03:04.545234 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:03:04.545239 | orchestrator |
2026-03-05 01:03:04.545244 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-05 01:03:04.545249 | orchestrator | Thursday 05 March 2026 01:00:53 +0000 (0:00:00.310) 0:00:04.907 ********
2026-03-05 01:03:04.545254 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-05 01:03:04.545259 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-05 01:03:04.545263 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-05 01:03:04.545268 | orchestrator |
2026-03-05 01:03:04.545272 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-05 01:03:04.545277 | orchestrator | Thursday 05 March 2026 01:00:53 +0000 (0:00:00.653) 0:00:05.560 ********
2026-03-05 01:03:04.545282 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:03:04.545287 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:03:04.545292 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:03:04.545296 | orchestrator |
2026-03-05 01:03:04.545301 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-05 01:03:04.545306 | orchestrator | Thursday 05 March 2026 01:00:54 +0000 (0:00:00.468) 0:00:06.029 ********
2026-03-05 01:03:04.545311 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-05 01:03:04.545315 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-05 01:03:04.545320 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-05 01:03:04.545324 | orchestrator |
2026-03-05 01:03:04.545329 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-05 01:03:04.545334 | orchestrator | Thursday 05 March 2026 01:00:56 +0000 (0:00:02.167) 0:00:08.197 ********
2026-03-05 01:03:04.545339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-05 01:03:04.545344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-05 01:03:04.545349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-05 01:03:04.545353 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.545358 | orchestrator |
2026-03-05 01:03:04.545373 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-05 01:03:04.545377 | orchestrator | Thursday 05 March 2026 01:00:57 +0000 (0:00:00.665) 0:00:08.863 ********
2026-03-05 01:03:04.545389 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-05 01:03:04.545396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-05 01:03:04.545400 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-05 01:03:04.545404 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.545408 | orchestrator |
2026-03-05 01:03:04.545412 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-05 01:03:04.545416 | orchestrator | Thursday 05 March 2026 01:00:58 +0000 (0:00:00.944) 0:00:09.807 ********
2026-03-05 01:03:04.545422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-05 01:03:04.545432 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-05 01:03:04.545436 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-05 01:03:04.545440 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.545444 | orchestrator |
2026-03-05 01:03:04.545448 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-05 01:03:04.545452 | orchestrator | Thursday 05 March 2026 01:00:58 +0000 (0:00:00.394) 0:00:10.201 ********
2026-03-05 01:03:04.545621 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5c4e81ddfd7c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-05 01:00:54.949299', 'end': '2026-03-05 01:00:54.988509', 'delta': '0:00:00.039210', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5c4e81ddfd7c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-05 01:03:04.545633 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '94084a850608', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-05 01:00:55.736670', 'end': '2026-03-05 01:00:55.772007', 'delta': '0:00:00.035337', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['94084a850608'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-05 01:03:04.545658 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '80de5c531700', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-05 01:00:56.251693', 'end': '2026-03-05 01:00:56.304522', 'delta': '0:00:00.052829', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80de5c531700'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-05 01:03:04.545663 | orchestrator |
2026-03-05 01:03:04.545667 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-05 01:03:04.545673 | orchestrator | Thursday 05 March 2026 01:00:58 +0000 (0:00:00.214) 0:00:10.416 ********
2026-03-05 01:03:04.545686 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:03:04.545697 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:03:04.545705 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:03:04.545711 | orchestrator |
2026-03-05 01:03:04.545716 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-05 01:03:04.545722 | orchestrator | Thursday 05 March 2026 01:00:59 +0000 (0:00:00.481) 0:00:10.898 ********
2026-03-05 01:03:04.545727 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-05 01:03:04.545734 | orchestrator |
2026-03-05 01:03:04.545740 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-05 01:03:04.545746 | orchestrator | Thursday 05 March 2026 01:01:00 +0000 (0:00:01.721) 0:00:12.619 ********
2026-03-05 01:03:04.545752 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.545758 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:03:04.545764 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:03:04.545771 | orchestrator |
2026-03-05 01:03:04.545778 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-05 01:03:04.545784 | orchestrator | Thursday 05 March 2026 01:01:01 +0000 (0:00:00.300) 0:00:12.920 ********
2026-03-05 01:03:04.545791 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.545797 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:03:04.545803 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:03:04.545809 | orchestrator |
2026-03-05 01:03:04.545816 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-05 01:03:04.545822 | orchestrator | Thursday 05 March 2026 01:01:01 +0000 (0:00:00.454) 0:00:13.375 ********
2026-03-05 01:03:04.545827 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.545833 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:03:04.545840 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:03:04.545847 | orchestrator |
2026-03-05 01:03:04.545854 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-05 01:03:04.545860 | orchestrator | Thursday 05 March 2026 01:01:02 +0000 (0:00:00.536) 0:00:13.911 ********
2026-03-05 01:03:04.545867 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:03:04.545873 | orchestrator |
2026-03-05 01:03:04.545881 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-05 01:03:04.545885 | orchestrator | Thursday 05 March 2026 01:01:02 +0000 (0:00:00.149) 0:00:14.061 ********
2026-03-05 01:03:04.545889 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.545893 | orchestrator |
2026-03-05 01:03:04.545896 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-05 01:03:04.545900 | orchestrator | Thursday 05 March 2026 01:01:02 +0000 (0:00:00.239) 0:00:14.300 ********
2026-03-05 01:03:04.545904 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.545908 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:03:04.545912 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:03:04.545916 | orchestrator |
2026-03-05 01:03:04.545919 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-05 01:03:04.545923 | orchestrator | Thursday 05 March 2026 01:01:02 +0000 (0:00:00.391) 0:00:14.692 ********
2026-03-05 01:03:04.545927 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.545931 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:03:04.545935 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:03:04.545939 | orchestrator |
2026-03-05 01:03:04.545942 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-05 01:03:04.545946 | orchestrator | Thursday 05 March 2026 01:01:03 +0000 (0:00:00.402) 0:00:15.095 ********
2026-03-05 01:03:04.545950 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.545954 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:03:04.545958 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:03:04.545962 | orchestrator |
2026-03-05 01:03:04.545966 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-05 01:03:04.545970 | orchestrator | Thursday 05 March 2026 01:01:03 +0000 (0:00:00.597) 0:00:15.692 ********
2026-03-05 01:03:04.545979 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.545983 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:03:04.545987 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:03:04.545991 | orchestrator |
2026-03-05 01:03:04.545994 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-05 01:03:04.545998 | orchestrator | Thursday 05 March 2026 01:01:04 +0000 (0:00:00.412) 0:00:16.105 ********
2026-03-05 01:03:04.546002 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.546006 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:03:04.546010 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:03:04.546051 | orchestrator |
2026-03-05 01:03:04.546055 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-05 01:03:04.546059 | orchestrator | Thursday 05 March 2026 01:01:04 +0000 (0:00:00.447) 0:00:16.553 ********
2026-03-05 01:03:04.546063 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.546067 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:03:04.546071 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:03:04.546074 | orchestrator |
2026-03-05 01:03:04.546101 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-05 01:03:04.546110 | orchestrator | Thursday 05 March 2026 01:01:05 +0000 (0:00:00.325) 0:00:16.878 ********
2026-03-05 01:03:04.546115 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:03:04.546119 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:03:04.546122 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:03:04.546126 | orchestrator |
2026-03-05 01:03:04.546130 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-05 01:03:04.546178 | orchestrator | Thursday 05 March 2026 01:01:05 +0000 (0:00:00.565) 0:00:17.444 ********
2026-03-05 01:03:04.546185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8e61642d--a609--5f4c--883e--a16b698ed397-osd--block--8e61642d--a609--5f4c--883e--a16b698ed397', 'dm-uuid-LVM-LbLRM4MoU7LrtCpLRhZ98aBrXC5CKd9TorD81YopypD0x28jJAK8Hq9clofUSZiz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-05 01:03:04.546192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1a9c38f8--c56f--5625--8ade--2e45962405d2-osd--block--1a9c38f8--c56f--5625--8ade--2e45962405d2', 'dm-uuid-LVM-Awd6JFTEZhabPZZ269I3lfUatL84usmfNrJzp1u0OfKxZ9ov2M1W0FL1CTfuuxfS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-05 01:03:04.546198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:03:04.546204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:03:04.546208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:03:04.546217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:03:04.546221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:03:04.546242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:03:04.546283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:03:04.546288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-05 01:03:04.546296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part1', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part14', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part15', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part16', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546308 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--487cf15b--a3c4--55bb--8565--d1e78d85d824-osd--block--487cf15b--a3c4--55bb--8565--d1e78d85d824', 'dm-uuid-LVM-Dt8XNnSQe3wlln96iskXeizrfvxQBhuXH3Sg7aJ4PaS3fhgGCsS4rDsTtuxSPKbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8e61642d--a609--5f4c--883e--a16b698ed397-osd--block--8e61642d--a609--5f4c--883e--a16b698ed397'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AFl9q1-L64n-Gj7c-kBPf-4pLx-6hdv-2dXo3s', 'scsi-0QEMU_QEMU_HARDDISK_7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4', 'scsi-SQEMU_QEMU_HARDDISK_7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04f48836--d47d--5181--a61a--7e2c62572595-osd--block--04f48836--d47d--5181--a61a--7e2c62572595', 'dm-uuid-LVM-OJN9tS92YMA7b805RALhO0UBIRFqsk88oV19gmqjAodf7KHfSG0FCr1O8vHcprn1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1a9c38f8--c56f--5625--8ade--2e45962405d2-osd--block--1a9c38f8--c56f--5625--8ade--2e45962405d2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B01kgl-LOeW-EjUU-UANj-Hb1R-VO9H-0ZSNyu', 'scsi-0QEMU_QEMU_HARDDISK_1cde8d38-c9d3-4512-8106-c139834ff42b', 'scsi-SQEMU_QEMU_HARDDISK_1cde8d38-c9d3-4512-8106-c139834ff42b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fbedff-eb29-4e1b-a232-9476e4a5bada', 'scsi-SQEMU_QEMU_HARDDISK_e9fbedff-eb29-4e1b-a232-9476e4a5bada'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-05 01:03:04.546392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546400 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:03:04.546405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bb27c3c1--5e00--588a--af48--66c3e9a20c72-osd--block--bb27c3c1--5e00--588a--af48--66c3e9a20c72', 'dm-uuid-LVM-aAhEHT9pjwGSpfrIrtjDtv5kGox94UV3Hcd8aIBrz2VbIQnyCRFrxK1WBmY4wZuT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52eeae7c--0ac3--5716--aafe--40e466221a22-osd--block--52eeae7c--0ac3--5716--aafe--40e466221a22', 'dm-uuid-LVM-yisYFX54apoGhi6gycsqiSU5w2pvRttzJJr37NcZ9qiTzIf7Tb0paCfHpcE4eNSQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part1', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part14', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part15', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part16', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--487cf15b--a3c4--55bb--8565--d1e78d85d824-osd--block--487cf15b--a3c4--55bb--8565--d1e78d85d824'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2gyAHZ-CD5F-8jUg-pmWW-VCFj-v7X8-fe5qeY', 'scsi-0QEMU_QEMU_HARDDISK_9c8197fe-cfc6-470d-b43f-168fdfa4c980', 'scsi-SQEMU_QEMU_HARDDISK_9c8197fe-cfc6-470d-b43f-168fdfa4c980'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--04f48836--d47d--5181--a61a--7e2c62572595-osd--block--04f48836--d47d--5181--a61a--7e2c62572595'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PS3SUM-PYZF-ELRU-RN5I-RCkV-E6ZE-TFZhn0', 'scsi-0QEMU_QEMU_HARDDISK_bc7e009b-77b4-429d-819f-0751386ded0b', 'scsi-SQEMU_QEMU_HARDDISK_bc7e009b-77b4-429d-819f-0751386ded0b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c272dc3f-f5b6-4857-91f2-561a599f15b5', 'scsi-SQEMU_QEMU_HARDDISK_c272dc3f-f5b6-4857-91f2-561a599f15b5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-05 01:03:04.546521 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:03:04.546525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-05 01:03:04.546548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part1', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part14', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part15', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part16', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546558 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bb27c3c1--5e00--588a--af48--66c3e9a20c72-osd--block--bb27c3c1--5e00--588a--af48--66c3e9a20c72'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NdTYuF-6z14-ZW1D-7Z0k-Kg9t-W74X-gW7nVL', 'scsi-0QEMU_QEMU_HARDDISK_177e9830-d762-48d2-8720-88dd872b3a27', 'scsi-SQEMU_QEMU_HARDDISK_177e9830-d762-48d2-8720-88dd872b3a27'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--52eeae7c--0ac3--5716--aafe--40e466221a22-osd--block--52eeae7c--0ac3--5716--aafe--40e466221a22'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WrdaDs-mwcO-AhgX-fS5E-xeBY-IK1o-ejxiDn', 'scsi-0QEMU_QEMU_HARDDISK_80e7620b-1c7d-40ff-852b-40246feca9c5', 'scsi-SQEMU_QEMU_HARDDISK_80e7620b-1c7d-40ff-852b-40246feca9c5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_886d7f4d-c342-4547-93ea-f5198c18b4a1', 'scsi-SQEMU_QEMU_HARDDISK_886d7f4d-c342-4547-93ea-f5198c18b4a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-05 01:03:04.546585 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:03:04.546589 | orchestrator | 2026-03-05 01:03:04.546594 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-05 01:03:04.546599 | orchestrator | Thursday 05 March 2026 01:01:06 +0000 (0:00:00.555) 0:00:17.999 ******** 2026-03-05 01:03:04.546604 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8e61642d--a609--5f4c--883e--a16b698ed397-osd--block--8e61642d--a609--5f4c--883e--a16b698ed397', 'dm-uuid-LVM-LbLRM4MoU7LrtCpLRhZ98aBrXC5CKd9TorD81YopypD0x28jJAK8Hq9clofUSZiz'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546613 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1a9c38f8--c56f--5625--8ade--2e45962405d2-osd--block--1a9c38f8--c56f--5625--8ade--2e45962405d2', 'dm-uuid-LVM-Awd6JFTEZhabPZZ269I3lfUatL84usmfNrJzp1u0OfKxZ9ov2M1W0FL1CTfuuxfS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546618 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546623 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546628 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546640 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546646 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546651 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546660 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546665 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--487cf15b--a3c4--55bb--8565--d1e78d85d824-osd--block--487cf15b--a3c4--55bb--8565--d1e78d85d824', 'dm-uuid-LVM-Dt8XNnSQe3wlln96iskXeizrfvxQBhuXH3Sg7aJ4PaS3fhgGCsS4rDsTtuxSPKbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546691 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546704 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04f48836--d47d--5181--a61a--7e2c62572595-osd--block--04f48836--d47d--5181--a61a--7e2c62572595', 'dm-uuid-LVM-OJN9tS92YMA7b805RALhO0UBIRFqsk88oV19gmqjAodf7KHfSG0FCr1O8vHcprn1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-03-05 01:03:04.546708 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546716 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part1', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part14', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part15', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part16', 'scsi-SQEMU_QEMU_HARDDISK_7818bcf6-78f1-48ba-b92b-b536ad3835fb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546724 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8e61642d--a609--5f4c--883e--a16b698ed397-osd--block--8e61642d--a609--5f4c--883e--a16b698ed397'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AFl9q1-L64n-Gj7c-kBPf-4pLx-6hdv-2dXo3s', 'scsi-0QEMU_QEMU_HARDDISK_7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4', 'scsi-SQEMU_QEMU_HARDDISK_7a81eef9-1ec7-478e-bff8-8c3b6c97c0d4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546733 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546738 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1a9c38f8--c56f--5625--8ade--2e45962405d2-osd--block--1a9c38f8--c56f--5625--8ade--2e45962405d2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B01kgl-LOeW-EjUU-UANj-Hb1R-VO9H-0ZSNyu', 'scsi-0QEMU_QEMU_HARDDISK_1cde8d38-c9d3-4512-8106-c139834ff42b', 'scsi-SQEMU_QEMU_HARDDISK_1cde8d38-c9d3-4512-8106-c139834ff42b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fbedff-eb29-4e1b-a232-9476e4a5bada', 'scsi-SQEMU_QEMU_HARDDISK_e9fbedff-eb29-4e1b-a232-9476e4a5bada'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546749 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546754 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546766 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546770 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546777 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:03:04.546782 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bb27c3c1--5e00--588a--af48--66c3e9a20c72-osd--block--bb27c3c1--5e00--588a--af48--66c3e9a20c72', 'dm-uuid-LVM-aAhEHT9pjwGSpfrIrtjDtv5kGox94UV3Hcd8aIBrz2VbIQnyCRFrxK1WBmY4wZuT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546786 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546790 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--52eeae7c--0ac3--5716--aafe--40e466221a22-osd--block--52eeae7c--0ac3--5716--aafe--40e466221a22', 'dm-uuid-LVM-yisYFX54apoGhi6gycsqiSU5w2pvRttzJJr37NcZ9qiTzIf7Tb0paCfHpcE4eNSQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546804 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546823 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546829 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part1', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part14', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part15', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part16', 'scsi-SQEMU_QEMU_HARDDISK_cf23377a-f42e-406a-8eb8-34ba52ccfac6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546862 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546870 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--487cf15b--a3c4--55bb--8565--d1e78d85d824-osd--block--487cf15b--a3c4--55bb--8565--d1e78d85d824'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2gyAHZ-CD5F-8jUg-pmWW-VCFj-v7X8-fe5qeY', 'scsi-0QEMU_QEMU_HARDDISK_9c8197fe-cfc6-470d-b43f-168fdfa4c980', 'scsi-SQEMU_QEMU_HARDDISK_9c8197fe-cfc6-470d-b43f-168fdfa4c980'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546876 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546883 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--04f48836--d47d--5181--a61a--7e2c62572595-osd--block--04f48836--d47d--5181--a61a--7e2c62572595'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PS3SUM-PYZF-ELRU-RN5I-RCkV-E6ZE-TFZhn0', 'scsi-0QEMU_QEMU_HARDDISK_bc7e009b-77b4-429d-819f-0751386ded0b', 'scsi-SQEMU_QEMU_HARDDISK_bc7e009b-77b4-429d-819f-0751386ded0b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546889 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546904 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c272dc3f-f5b6-4857-91f2-561a599f15b5', 'scsi-SQEMU_QEMU_HARDDISK_c272dc3f-f5b6-4857-91f2-561a599f15b5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546916 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546923 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546929 | orchestrator | skipping: 
[testbed-node-4] 2026-03-05 01:03:04.546936 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546951 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part1', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part14', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part15', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part16', 'scsi-SQEMU_QEMU_HARDDISK_9920fd12-02dd-4b62-9dd4-bd789f1a1f90-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bb27c3c1--5e00--588a--af48--66c3e9a20c72-osd--block--bb27c3c1--5e00--588a--af48--66c3e9a20c72'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NdTYuF-6z14-ZW1D-7Z0k-Kg9t-W74X-gW7nVL', 'scsi-0QEMU_QEMU_HARDDISK_177e9830-d762-48d2-8720-88dd872b3a27', 'scsi-SQEMU_QEMU_HARDDISK_177e9830-d762-48d2-8720-88dd872b3a27'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546971 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--52eeae7c--0ac3--5716--aafe--40e466221a22-osd--block--52eeae7c--0ac3--5716--aafe--40e466221a22'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WrdaDs-mwcO-AhgX-fS5E-xeBY-IK1o-ejxiDn', 'scsi-0QEMU_QEMU_HARDDISK_80e7620b-1c7d-40ff-852b-40246feca9c5', 'scsi-SQEMU_QEMU_HARDDISK_80e7620b-1c7d-40ff-852b-40246feca9c5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546977 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_886d7f4d-c342-4547-93ea-f5198c18b4a1', 'scsi-SQEMU_QEMU_HARDDISK_886d7f4d-c342-4547-93ea-f5198c18b4a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.546990 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-05-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-05 01:03:04.547001 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:03:04.547007 | orchestrator | 2026-03-05 01:03:04.547016 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-05 01:03:04.547020 | orchestrator | Thursday 05 March 2026 01:01:06 +0000 (0:00:00.631) 0:00:18.630 ******** 2026-03-05 01:03:04.547024 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:03:04.547029 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:03:04.547032 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:03:04.547037 | orchestrator | 2026-03-05 01:03:04.547041 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-03-05 01:03:04.547045 | orchestrator | Thursday 05 March 2026 01:01:07 +0000 (0:00:00.711) 0:00:19.342 ******** 2026-03-05 01:03:04.547049 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:03:04.547053 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:03:04.547057 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:03:04.547061 | orchestrator | 2026-03-05 01:03:04.547065 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-05 01:03:04.547069 | orchestrator | Thursday 05 March 2026 01:01:08 +0000 (0:00:00.558) 0:00:19.900 ******** 2026-03-05 01:03:04.547073 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:03:04.547077 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:03:04.547081 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:03:04.547085 | orchestrator | 2026-03-05 01:03:04.547089 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-05 01:03:04.547093 | orchestrator | Thursday 05 March 2026 01:01:09 +0000 (0:00:01.490) 0:00:21.390 ******** 2026-03-05 01:03:04.547097 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:03:04.547101 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:03:04.547105 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:03:04.547109 | orchestrator | 2026-03-05 01:03:04.547113 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-05 01:03:04.547117 | orchestrator | Thursday 05 March 2026 01:01:09 +0000 (0:00:00.313) 0:00:21.704 ******** 2026-03-05 01:03:04.547120 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:03:04.547124 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:03:04.547128 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:03:04.547132 | orchestrator | 2026-03-05 01:03:04.547164 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-03-05 01:03:04.547168 | orchestrator | Thursday 05 March 2026 01:01:10 +0000 (0:00:00.454) 0:00:22.158 ******** 2026-03-05 01:03:04.547173 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:03:04.547177 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:03:04.547181 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:03:04.547184 | orchestrator | 2026-03-05 01:03:04.547188 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-05 01:03:04.547192 | orchestrator | Thursday 05 March 2026 01:01:10 +0000 (0:00:00.561) 0:00:22.719 ******** 2026-03-05 01:03:04.547196 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-05 01:03:04.547201 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-05 01:03:04.547205 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-05 01:03:04.547209 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-05 01:03:04.547213 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-05 01:03:04.547217 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-05 01:03:04.547221 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-05 01:03:04.547229 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-05 01:03:04.547233 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-05 01:03:04.547237 | orchestrator | 2026-03-05 01:03:04.547241 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-05 01:03:04.547245 | orchestrator | Thursday 05 March 2026 01:01:11 +0000 (0:00:00.865) 0:00:23.585 ******** 2026-03-05 01:03:04.547249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-05 01:03:04.547253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-05 01:03:04.547257 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-03-05 01:03:04.547261 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:03:04.547265 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-05 01:03:04.547269 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-05 01:03:04.547273 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-05 01:03:04.547277 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:03:04.547283 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-05 01:03:04.547289 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-05 01:03:04.547296 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-05 01:03:04.547301 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:03:04.547307 | orchestrator | 2026-03-05 01:03:04.547313 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-05 01:03:04.547318 | orchestrator | Thursday 05 March 2026 01:01:12 +0000 (0:00:00.394) 0:00:23.980 ******** 2026-03-05 01:03:04.547325 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:03:04.547331 | orchestrator | 2026-03-05 01:03:04.547338 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-05 01:03:04.547345 | orchestrator | Thursday 05 March 2026 01:01:12 +0000 (0:00:00.764) 0:00:24.745 ******** 2026-03-05 01:03:04.547355 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:03:04.547362 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:03:04.547369 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:03:04.547376 | orchestrator | 2026-03-05 01:03:04.547388 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-03-05 01:03:04.547394 | orchestrator | Thursday 05 March 2026 01:01:13 +0000 (0:00:00.390) 0:00:25.135 ******** 2026-03-05 01:03:04.547399 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:03:04.547405 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:03:04.547410 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:03:04.547417 | orchestrator | 2026-03-05 01:03:04.547423 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-05 01:03:04.547430 | orchestrator | Thursday 05 March 2026 01:01:13 +0000 (0:00:00.323) 0:00:25.458 ******** 2026-03-05 01:03:04.547435 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:03:04.547441 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:03:04.547447 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:03:04.547453 | orchestrator | 2026-03-05 01:03:04.547460 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-05 01:03:04.547467 | orchestrator | Thursday 05 March 2026 01:01:14 +0000 (0:00:00.340) 0:00:25.798 ******** 2026-03-05 01:03:04.547473 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:03:04.547479 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:03:04.547485 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:03:04.547491 | orchestrator | 2026-03-05 01:03:04.547497 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-05 01:03:04.547503 | orchestrator | Thursday 05 March 2026 01:01:15 +0000 (0:00:00.998) 0:00:26.797 ******** 2026-03-05 01:03:04.547509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 01:03:04.547515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 01:03:04.547527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 01:03:04.547534 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:03:04.547540 | 
orchestrator | 2026-03-05 01:03:04.547546 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-05 01:03:04.547551 | orchestrator | Thursday 05 March 2026 01:01:15 +0000 (0:00:00.481) 0:00:27.279 ******** 2026-03-05 01:03:04.547557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 01:03:04.547563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 01:03:04.547569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 01:03:04.547576 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:03:04.547582 | orchestrator | 2026-03-05 01:03:04.547589 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-05 01:03:04.547595 | orchestrator | Thursday 05 March 2026 01:01:15 +0000 (0:00:00.470) 0:00:27.749 ******** 2026-03-05 01:03:04.547601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-05 01:03:04.547610 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-05 01:03:04.547614 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-05 01:03:04.547618 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:03:04.547624 | orchestrator | 2026-03-05 01:03:04.547630 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-05 01:03:04.547636 | orchestrator | Thursday 05 March 2026 01:01:16 +0000 (0:00:00.381) 0:00:28.131 ******** 2026-03-05 01:03:04.547642 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:03:04.547648 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:03:04.547654 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:03:04.547660 | orchestrator | 2026-03-05 01:03:04.547667 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-05 01:03:04.547673 | orchestrator | Thursday 05 March 2026 01:01:16 +0000 
(0:00:00.326) 0:00:28.457 ******** 2026-03-05 01:03:04.547680 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-05 01:03:04.547687 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-05 01:03:04.547693 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-05 01:03:04.547700 | orchestrator | 2026-03-05 01:03:04.547706 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-05 01:03:04.547712 | orchestrator | Thursday 05 March 2026 01:01:17 +0000 (0:00:00.517) 0:00:28.975 ******** 2026-03-05 01:03:04.547719 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-05 01:03:04.547725 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-05 01:03:04.547733 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-05 01:03:04.547738 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-05 01:03:04.547745 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-05 01:03:04.547751 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-05 01:03:04.547758 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-05 01:03:04.547764 | orchestrator | 2026-03-05 01:03:04.547771 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-05 01:03:04.547777 | orchestrator | Thursday 05 March 2026 01:01:18 +0000 (0:00:01.117) 0:00:30.092 ******** 2026-03-05 01:03:04.547783 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-05 01:03:04.547789 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-05 01:03:04.547796 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-05 01:03:04.547802 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-05 01:03:04.547809 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-05 01:03:04.547822 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-05 01:03:04.547836 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-05 01:03:04.547842 | orchestrator | 2026-03-05 01:03:04.547849 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-05 01:03:04.547861 | orchestrator | Thursday 05 March 2026 01:01:20 +0000 (0:00:02.026) 0:00:32.119 ******** 2026-03-05 01:03:04.547867 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:03:04.547874 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:03:04.547881 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-05 01:03:04.547887 | orchestrator | 2026-03-05 01:03:04.547893 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-05 01:03:04.547899 | orchestrator | Thursday 05 March 2026 01:01:20 +0000 (0:00:00.399) 0:00:32.518 ******** 2026-03-05 01:03:04.547906 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-05 01:03:04.547914 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-03-05 01:03:04.547918 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-05 01:03:04.547922 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-05 01:03:04.547926 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-05 01:03:04.547930 | orchestrator | 2026-03-05 01:03:04.547934 | orchestrator | TASK [generate keys] *********************************************************** 2026-03-05 01:03:04.547938 | orchestrator | Thursday 05 March 2026 01:02:06 +0000 (0:00:46.203) 0:01:18.722 ******** 2026-03-05 01:03:04.547978 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.547984 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.547988 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.547991 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.547995 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.547999 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 
01:03:04.548003 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-05 01:03:04.548007 | orchestrator | 2026-03-05 01:03:04.548011 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-05 01:03:04.548015 | orchestrator | Thursday 05 March 2026 01:02:32 +0000 (0:00:25.194) 0:01:43.917 ******** 2026-03-05 01:03:04.548019 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.548027 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.548031 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.548035 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.548039 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.548043 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.548047 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-05 01:03:04.548051 | orchestrator | 2026-03-05 01:03:04.548055 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-05 01:03:04.548058 | orchestrator | Thursday 05 March 2026 01:02:45 +0000 (0:00:12.851) 0:01:56.768 ******** 2026-03-05 01:03:04.548062 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.548066 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-05 01:03:04.548070 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 01:03:04.548074 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.548078 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-03-05 01:03:04.548087 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 01:03:04.548094 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.548098 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-05 01:03:04.548102 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 01:03:04.548106 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.548110 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-05 01:03:04.548113 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 01:03:04.548118 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.548122 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-05 01:03:04.548125 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 01:03:04.548130 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-05 01:03:04.548149 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-05 01:03:04.548153 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-05 01:03:04.548158 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-05 01:03:04.548161 | orchestrator | 2026-03-05 01:03:04.548165 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:03:04.548169 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-05 01:03:04.548175 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-05 01:03:04.548180 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-05 01:03:04.548184 | orchestrator | 2026-03-05 01:03:04.548188 | orchestrator | 2026-03-05 01:03:04.548192 | orchestrator | 2026-03-05 01:03:04.548195 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:03:04.548200 | orchestrator | Thursday 05 March 2026 01:03:03 +0000 (0:00:18.196) 0:02:14.965 ******** 2026-03-05 01:03:04.548204 | orchestrator | =============================================================================== 2026-03-05 01:03:04.548213 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.20s 2026-03-05 01:03:04.548217 | orchestrator | generate keys ---------------------------------------------------------- 25.19s 2026-03-05 01:03:04.548221 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.20s 2026-03-05 01:03:04.548225 | orchestrator | get keys from monitors ------------------------------------------------- 12.85s 2026-03-05 01:03:04.548229 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.17s 2026-03-05 01:03:04.548232 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.03s 2026-03-05 01:03:04.548237 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.72s 2026-03-05 01:03:04.548241 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.49s 2026-03-05 01:03:04.548245 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.12s 2026-03-05 01:03:04.548248 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 1.00s 2026-03-05 
01:03:04.548252 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.94s 2026-03-05 01:03:04.548256 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.89s 2026-03-05 01:03:04.548260 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.87s 2026-03-05 01:03:04.548264 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.76s 2026-03-05 01:03:04.548268 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.71s 2026-03-05 01:03:04.548272 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.68s 2026-03-05 01:03:04.548276 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.67s 2026-03-05 01:03:04.548280 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s 2026-03-05 01:03:04.548284 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.63s 2026-03-05 01:03:04.548287 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.63s 2026-03-05 01:03:04.548291 | orchestrator | 2026-03-05 01:03:04 | INFO  | Task 46bf9fed-e38a-4a61-acd5-ec072c095655 is in state STARTED 2026-03-05 01:03:04.548296 | orchestrator | 2026-03-05 01:03:04 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:07.588824 | orchestrator | 2026-03-05 01:03:07 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:03:07.590965 | orchestrator | 2026-03-05 01:03:07 | INFO  | Task 526853f8-07e0-44b4-b19d-c779094da933 is in state STARTED 2026-03-05 01:03:07.593229 | orchestrator | 2026-03-05 01:03:07 | INFO  | Task 46bf9fed-e38a-4a61-acd5-ec072c095655 is in state STARTED 2026-03-05 01:03:07.593987 | orchestrator | 2026-03-05 01:03:07 | INFO  | Wait 1 second(s) until the next 
check 2026-03-05 01:03:10.636986 | orchestrator | 2026-03-05 01:03:10 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:03:10.640291 | orchestrator | 2026-03-05 01:03:10 | INFO  | Task 526853f8-07e0-44b4-b19d-c779094da933 is in state STARTED 2026-03-05 01:03:10.643592 | orchestrator | 2026-03-05 01:03:10 | INFO  | Task 46bf9fed-e38a-4a61-acd5-ec072c095655 is in state STARTED 2026-03-05 01:03:10.643670 | orchestrator | 2026-03-05 01:03:10 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:13.687633 | orchestrator | 2026-03-05 01:03:13 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:03:13.691219 | orchestrator | 2026-03-05 01:03:13 | INFO  | Task 526853f8-07e0-44b4-b19d-c779094da933 is in state STARTED 2026-03-05 01:03:13.693725 | orchestrator | 2026-03-05 01:03:13 | INFO  | Task 46bf9fed-e38a-4a61-acd5-ec072c095655 is in state SUCCESS 2026-03-05 01:03:13.694375 | orchestrator | 2026-03-05 01:03:13 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:13.695672 | orchestrator | 2026-03-05 01:03:13.695718 | orchestrator | 2026-03-05 01:03:13.695727 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:03:13.695735 | orchestrator | 2026-03-05 01:03:13.695743 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:03:13.695750 | orchestrator | Thursday 05 March 2026 01:01:37 +0000 (0:00:00.278) 0:00:00.278 ******** 2026-03-05 01:03:13.695759 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:03:13.695767 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:03:13.695774 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:03:13.695781 | orchestrator | 2026-03-05 01:03:13.695788 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:03:13.695794 | orchestrator | Thursday 05 March 2026 01:01:37 +0000 
(0:00:00.296) 0:00:00.575 ******** 2026-03-05 01:03:13.695801 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-05 01:03:13.695808 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-05 01:03:13.695814 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-05 01:03:13.695821 | orchestrator | 2026-03-05 01:03:13.695828 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-05 01:03:13.695834 | orchestrator | 2026-03-05 01:03:13.695841 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-05 01:03:13.695847 | orchestrator | Thursday 05 March 2026 01:01:37 +0000 (0:00:00.437) 0:00:01.013 ******** 2026-03-05 01:03:13.695854 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:03:13.695861 | orchestrator | 2026-03-05 01:03:13.695867 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-05 01:03:13.695884 | orchestrator | Thursday 05 March 2026 01:01:38 +0000 (0:00:00.543) 0:00:01.556 ******** 2026-03-05 01:03:13.695916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:03:13.695958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:03:13.695975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:03:13.695989 | orchestrator | 2026-03-05 01:03:13.695996 | orchestrator | TASK [horizon : Set empty custom policy] 
*************************************** 2026-03-05 01:03:13.696003 | orchestrator | Thursday 05 March 2026 01:01:39 +0000 (0:00:01.084) 0:00:02.640 ******** 2026-03-05 01:03:13.696010 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:03:13.696017 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:03:13.696024 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:03:13.696030 | orchestrator | 2026-03-05 01:03:13.696037 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-05 01:03:13.696042 | orchestrator | Thursday 05 March 2026 01:01:40 +0000 (0:00:00.614) 0:00:03.255 ******** 2026-03-05 01:03:13.696051 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-05 01:03:13.696388 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-05 01:03:13.696400 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-05 01:03:13.696407 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-05 01:03:13.696415 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-05 01:03:13.696422 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-05 01:03:13.696430 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-05 01:03:13.696438 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-05 01:03:13.696445 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-05 01:03:13.696454 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-05 01:03:13.696468 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-05 01:03:13.696477 | orchestrator | 
skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-05 01:03:13.696485 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-05 01:03:13.696493 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-05 01:03:13.696501 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-05 01:03:13.696509 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-05 01:03:13.696516 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-05 01:03:13.696524 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-05 01:03:13.696531 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-05 01:03:13.696539 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-05 01:03:13.696547 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-05 01:03:13.696554 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-05 01:03:13.696562 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-05 01:03:13.696570 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-05 01:03:13.696579 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-05 01:03:13.696599 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-05 01:03:13.696607 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-05 01:03:13.696615 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-05 01:03:13.696623 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-05 01:03:13.696631 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-05 01:03:13.696638 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-05 01:03:13.696653 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-05 01:03:13.696661 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-05 01:03:13.696670 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-05 01:03:13.696677 | orchestrator | 2026-03-05 01:03:13.696685 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-05 01:03:13.696693 | orchestrator | Thursday 05 March 2026 01:01:40 +0000 (0:00:00.751) 0:00:04.007 ******** 2026-03-05 01:03:13.696702 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:03:13.696710 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:03:13.696718 | orchestrator | ok: [testbed-node-2] 
2026-03-05 01:03:13.696725 | orchestrator | 2026-03-05 01:03:13.696734 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-05 01:03:13.696742 | orchestrator | Thursday 05 March 2026 01:01:41 +0000 (0:00:00.316) 0:00:04.323 ******** 2026-03-05 01:03:13.696756 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.696767 | orchestrator | 2026-03-05 01:03:13.696775 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-05 01:03:13.696783 | orchestrator | Thursday 05 March 2026 01:01:41 +0000 (0:00:00.131) 0:00:04.454 ******** 2026-03-05 01:03:13.696790 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.696799 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.696807 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.696815 | orchestrator | 2026-03-05 01:03:13.696823 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-05 01:03:13.696831 | orchestrator | Thursday 05 March 2026 01:01:41 +0000 (0:00:00.464) 0:00:04.919 ******** 2026-03-05 01:03:13.696838 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:03:13.696846 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:03:13.696855 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:03:13.696863 | orchestrator | 2026-03-05 01:03:13.696870 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-05 01:03:13.696878 | orchestrator | Thursday 05 March 2026 01:01:41 +0000 (0:00:00.311) 0:00:05.231 ******** 2026-03-05 01:03:13.696886 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.696894 | orchestrator | 2026-03-05 01:03:13.696902 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-05 01:03:13.696909 | orchestrator | Thursday 05 March 2026 01:01:42 +0000 (0:00:00.136) 0:00:05.368 ******** 2026-03-05 
01:03:13.696916 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.696924 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.696932 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.696964 | orchestrator | 2026-03-05 01:03:13.696976 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-05 01:03:13.696984 | orchestrator | Thursday 05 March 2026 01:01:42 +0000 (0:00:00.300) 0:00:05.668 ******** 2026-03-05 01:03:13.696992 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:03:13.697000 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:03:13.697007 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:03:13.697015 | orchestrator | 2026-03-05 01:03:13.697020 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-05 01:03:13.697027 | orchestrator | Thursday 05 March 2026 01:01:42 +0000 (0:00:00.340) 0:00:06.009 ******** 2026-03-05 01:03:13.697034 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697042 | orchestrator | 2026-03-05 01:03:13.697050 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-05 01:03:13.697057 | orchestrator | Thursday 05 March 2026 01:01:43 +0000 (0:00:00.357) 0:00:06.366 ******** 2026-03-05 01:03:13.697066 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697074 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.697081 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.697089 | orchestrator | 2026-03-05 01:03:13.697097 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-05 01:03:13.697106 | orchestrator | Thursday 05 March 2026 01:01:43 +0000 (0:00:00.285) 0:00:06.651 ******** 2026-03-05 01:03:13.697114 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:03:13.697122 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:03:13.697130 | orchestrator | 
ok: [testbed-node-2] 2026-03-05 01:03:13.697164 | orchestrator | 2026-03-05 01:03:13.697173 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-05 01:03:13.697182 | orchestrator | Thursday 05 March 2026 01:01:43 +0000 (0:00:00.356) 0:00:07.008 ******** 2026-03-05 01:03:13.697190 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697197 | orchestrator | 2026-03-05 01:03:13.697205 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-05 01:03:13.697213 | orchestrator | Thursday 05 March 2026 01:01:43 +0000 (0:00:00.127) 0:00:07.135 ******** 2026-03-05 01:03:13.697221 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697229 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.697239 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.697246 | orchestrator | 2026-03-05 01:03:13.697254 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-05 01:03:13.697264 | orchestrator | Thursday 05 March 2026 01:01:44 +0000 (0:00:00.311) 0:00:07.447 ******** 2026-03-05 01:03:13.697271 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:03:13.697279 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:03:13.697287 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:03:13.697296 | orchestrator | 2026-03-05 01:03:13.697303 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-05 01:03:13.697310 | orchestrator | Thursday 05 March 2026 01:01:44 +0000 (0:00:00.496) 0:00:07.943 ******** 2026-03-05 01:03:13.697316 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697322 | orchestrator | 2026-03-05 01:03:13.697329 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-05 01:03:13.697337 | orchestrator | Thursday 05 March 2026 01:01:44 +0000 (0:00:00.169) 0:00:08.113 
******** 2026-03-05 01:03:13.697346 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697360 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.697369 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.697377 | orchestrator | 2026-03-05 01:03:13.697386 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-05 01:03:13.697394 | orchestrator | Thursday 05 March 2026 01:01:45 +0000 (0:00:00.312) 0:00:08.425 ******** 2026-03-05 01:03:13.697401 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:03:13.697409 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:03:13.697417 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:03:13.697426 | orchestrator | 2026-03-05 01:03:13.697441 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-05 01:03:13.697450 | orchestrator | Thursday 05 March 2026 01:01:45 +0000 (0:00:00.347) 0:00:08.773 ******** 2026-03-05 01:03:13.697457 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697466 | orchestrator | 2026-03-05 01:03:13.697473 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-05 01:03:13.697481 | orchestrator | Thursday 05 March 2026 01:01:45 +0000 (0:00:00.126) 0:00:08.899 ******** 2026-03-05 01:03:13.697489 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697497 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.697505 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.697513 | orchestrator | 2026-03-05 01:03:13.697521 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-05 01:03:13.697538 | orchestrator | Thursday 05 March 2026 01:01:45 +0000 (0:00:00.326) 0:00:09.226 ******** 2026-03-05 01:03:13.697547 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:03:13.697555 | orchestrator | ok: [testbed-node-1] 2026-03-05 
01:03:13.697564 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:03:13.697572 | orchestrator | 2026-03-05 01:03:13.697581 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-05 01:03:13.697590 | orchestrator | Thursday 05 March 2026 01:01:46 +0000 (0:00:00.583) 0:00:09.809 ******** 2026-03-05 01:03:13.697598 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697607 | orchestrator | 2026-03-05 01:03:13.697616 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-05 01:03:13.697624 | orchestrator | Thursday 05 March 2026 01:01:46 +0000 (0:00:00.120) 0:00:09.930 ******** 2026-03-05 01:03:13.697633 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697640 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.697649 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.697657 | orchestrator | 2026-03-05 01:03:13.697666 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-05 01:03:13.697675 | orchestrator | Thursday 05 March 2026 01:01:46 +0000 (0:00:00.285) 0:00:10.215 ******** 2026-03-05 01:03:13.697683 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:03:13.697691 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:03:13.697699 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:03:13.697707 | orchestrator | 2026-03-05 01:03:13.697716 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-05 01:03:13.697726 | orchestrator | Thursday 05 March 2026 01:01:47 +0000 (0:00:00.302) 0:00:10.518 ******** 2026-03-05 01:03:13.697735 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697744 | orchestrator | 2026-03-05 01:03:13.697752 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-05 01:03:13.697761 | orchestrator | Thursday 05 March 2026 01:01:47 +0000 
(0:00:00.144) 0:00:10.662 ******** 2026-03-05 01:03:13.697769 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697777 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.697785 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.697792 | orchestrator | 2026-03-05 01:03:13.697799 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-05 01:03:13.697806 | orchestrator | Thursday 05 March 2026 01:01:47 +0000 (0:00:00.519) 0:00:11.182 ******** 2026-03-05 01:03:13.697813 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:03:13.697820 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:03:13.697828 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:03:13.697836 | orchestrator | 2026-03-05 01:03:13.697844 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-05 01:03:13.697851 | orchestrator | Thursday 05 March 2026 01:01:48 +0000 (0:00:00.372) 0:00:11.555 ******** 2026-03-05 01:03:13.697859 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697867 | orchestrator | 2026-03-05 01:03:13.697875 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-05 01:03:13.697883 | orchestrator | Thursday 05 March 2026 01:01:48 +0000 (0:00:00.142) 0:00:11.697 ******** 2026-03-05 01:03:13.697898 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697906 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.697914 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.697922 | orchestrator | 2026-03-05 01:03:13.697930 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-05 01:03:13.697936 | orchestrator | Thursday 05 March 2026 01:01:48 +0000 (0:00:00.310) 0:00:12.008 ******** 2026-03-05 01:03:13.697942 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:03:13.697947 | orchestrator | ok: 
[testbed-node-1] 2026-03-05 01:03:13.697953 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:03:13.697959 | orchestrator | 2026-03-05 01:03:13.697965 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-05 01:03:13.697971 | orchestrator | Thursday 05 March 2026 01:01:49 +0000 (0:00:00.317) 0:00:12.325 ******** 2026-03-05 01:03:13.697977 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.697982 | orchestrator | 2026-03-05 01:03:13.697988 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-05 01:03:13.697995 | orchestrator | Thursday 05 March 2026 01:01:49 +0000 (0:00:00.164) 0:00:12.490 ******** 2026-03-05 01:03:13.698002 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.698009 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.698093 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.698103 | orchestrator | 2026-03-05 01:03:13.698111 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-05 01:03:13.698119 | orchestrator | Thursday 05 March 2026 01:01:49 +0000 (0:00:00.577) 0:00:13.068 ******** 2026-03-05 01:03:13.698127 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:03:13.698150 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:03:13.698157 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:03:13.698162 | orchestrator | 2026-03-05 01:03:13.698169 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-05 01:03:13.698182 | orchestrator | Thursday 05 March 2026 01:01:51 +0000 (0:00:01.720) 0:00:14.788 ******** 2026-03-05 01:03:13.698189 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-05 01:03:13.698198 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-05 
01:03:13.698205 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-05 01:03:13.698212 | orchestrator | 2026-03-05 01:03:13.698220 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-05 01:03:13.698227 | orchestrator | Thursday 05 March 2026 01:01:53 +0000 (0:00:01.921) 0:00:16.710 ******** 2026-03-05 01:03:13.698234 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-05 01:03:13.698243 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-05 01:03:13.698250 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-05 01:03:13.698258 | orchestrator | 2026-03-05 01:03:13.698275 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-05 01:03:13.698283 | orchestrator | Thursday 05 March 2026 01:01:55 +0000 (0:00:02.411) 0:00:19.121 ******** 2026-03-05 01:03:13.698290 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-05 01:03:13.698298 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-05 01:03:13.698304 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-05 01:03:13.698310 | orchestrator | 2026-03-05 01:03:13.698316 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-05 01:03:13.698322 | orchestrator | Thursday 05 March 2026 01:01:57 +0000 (0:00:01.910) 0:00:21.032 ******** 2026-03-05 01:03:13.698336 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.698343 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.698350 | 
orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.698360 | orchestrator | 2026-03-05 01:03:13.698367 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-05 01:03:13.698375 | orchestrator | Thursday 05 March 2026 01:01:58 +0000 (0:00:00.293) 0:00:21.326 ******** 2026-03-05 01:03:13.698382 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.698391 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.698399 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.698406 | orchestrator | 2026-03-05 01:03:13.698413 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-05 01:03:13.698420 | orchestrator | Thursday 05 March 2026 01:01:58 +0000 (0:00:00.325) 0:00:21.651 ******** 2026-03-05 01:03:13.698428 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:03:13.698435 | orchestrator | 2026-03-05 01:03:13.698442 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-05 01:03:13.698449 | orchestrator | Thursday 05 March 2026 01:01:59 +0000 (0:00:00.703) 0:00:22.355 ******** 2026-03-05 01:03:13.698468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:03:13.698490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:03:13.698512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:03:13.698520 | orchestrator | 2026-03-05 01:03:13.698527 | orchestrator | TASK 
[service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-05 01:03:13.698535 | orchestrator | Thursday 05 March 2026 01:02:00 +0000 (0:00:01.440) 0:00:23.795 ******** 2026-03-05 01:03:13.698550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 01:03:13.698563 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.698580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 01:03:13.698593 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.698601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 01:03:13.698609 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.698617 | orchestrator | 2026-03-05 01:03:13.698624 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-05 01:03:13.698632 | orchestrator | Thursday 05 March 2026 01:02:01 +0000 (0:00:00.644) 0:00:24.439 ******** 2026-03-05 01:03:13.698649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 01:03:13.698664 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.698673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 01:03:13.698681 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.698697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-05 01:03:13.698715 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.698722 | orchestrator | 2026-03-05 01:03:13.698729 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-05 01:03:13.698736 | orchestrator | Thursday 05 March 2026 01:02:02 +0000 (0:00:00.818) 0:00:25.258 ******** 2026-03-05 01:03:13.698748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:03:13.698763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:03:13.698785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-05 01:03:13.698800 | orchestrator | 2026-03-05 01:03:13.698808 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-05 01:03:13.698818 | orchestrator | Thursday 05 March 2026 01:02:03 +0000 (0:00:01.296) 0:00:26.554 ******** 2026-03-05 01:03:13.698825 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:03:13.698832 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:03:13.698839 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:03:13.698847 | orchestrator | 2026-03-05 01:03:13.698854 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-05 01:03:13.698866 | orchestrator | Thursday 05 March 2026 01:02:03 +0000 (0:00:00.297) 0:00:26.852 ******** 2026-03-05 01:03:13.698873 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:03:13.698881 | orchestrator | 2026-03-05 01:03:13.698888 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-05 01:03:13.698895 | orchestrator | Thursday 05 March 2026 01:02:04 +0000 (0:00:00.489) 0:00:27.342 ******** 2026-03-05 01:03:13.698903 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:03:13.698910 | orchestrator | 2026-03-05 01:03:13.698917 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-05 01:03:13.698925 | orchestrator | Thursday 05 March 2026 01:02:06 +0000 (0:00:02.545) 0:00:29.887 ******** 2026-03-05 01:03:13.698932 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:03:13.698939 | orchestrator | 2026-03-05 01:03:13.698946 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-05 01:03:13.698954 | orchestrator | Thursday 05 March 2026 01:02:09 +0000 (0:00:02.895) 0:00:32.783 ******** 2026-03-05 01:03:13.698961 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:03:13.698968 | orchestrator | 2026-03-05 01:03:13.698976 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-05 01:03:13.698983 | orchestrator | Thursday 05 March 2026 01:02:27 +0000 (0:00:17.507) 0:00:50.290 ******** 2026-03-05 01:03:13.698991 | orchestrator | 2026-03-05 01:03:13.698998 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-05 01:03:13.699006 | orchestrator | Thursday 05 March 2026 01:02:27 +0000 (0:00:00.060) 0:00:50.351 ******** 2026-03-05 01:03:13.699013 | orchestrator | 2026-03-05 01:03:13.699020 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-05 01:03:13.699026 | orchestrator | Thursday 05 March 2026 01:02:27 +0000 (0:00:00.060) 0:00:50.411 ******** 2026-03-05 01:03:13.699033 
| orchestrator | 2026-03-05 01:03:13.699040 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-05 01:03:13.699048 | orchestrator | Thursday 05 March 2026 01:02:27 +0000 (0:00:00.062) 0:00:50.474 ******** 2026-03-05 01:03:13.699056 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:03:13.699063 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:03:13.699070 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:03:13.699077 | orchestrator | 2026-03-05 01:03:13.699084 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:03:13.699093 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-05 01:03:13.699104 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-05 01:03:13.699112 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-05 01:03:13.699119 | orchestrator | 2026-03-05 01:03:13.699126 | orchestrator | 2026-03-05 01:03:13.699185 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:03:13.699194 | orchestrator | Thursday 05 March 2026 01:03:12 +0000 (0:00:44.768) 0:01:35.243 ******** 2026-03-05 01:03:13.699208 | orchestrator | =============================================================================== 2026-03-05 01:03:13.699214 | orchestrator | horizon : Restart horizon container ------------------------------------ 44.77s 2026-03-05 01:03:13.699220 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.51s 2026-03-05 01:03:13.699226 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.90s 2026-03-05 01:03:13.699233 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.55s 
2026-03-05 01:03:13.699240 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.41s 2026-03-05 01:03:13.699247 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.92s 2026-03-05 01:03:13.699254 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.91s 2026-03-05 01:03:13.699261 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.72s 2026-03-05 01:03:13.699268 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.44s 2026-03-05 01:03:13.699275 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.30s 2026-03-05 01:03:13.699283 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.08s 2026-03-05 01:03:13.699291 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.82s 2026-03-05 01:03:13.699298 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2026-03-05 01:03:13.699306 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2026-03-05 01:03:13.699312 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.64s 2026-03-05 01:03:13.699319 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.61s 2026-03-05 01:03:13.699325 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s 2026-03-05 01:03:13.699332 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.58s 2026-03-05 01:03:13.699339 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s 2026-03-05 01:03:13.699347 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 
2026-03-05 01:03:16.741831 | orchestrator | 2026-03-05 01:03:16 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:03:16.743250 | orchestrator | 2026-03-05 01:03:16 | INFO  | Task 526853f8-07e0-44b4-b19d-c779094da933 is in state STARTED 2026-03-05 01:03:16.743308 | orchestrator | 2026-03-05 01:03:16 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:19.779780 | orchestrator | 2026-03-05 01:03:19 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:03:19.782111 | orchestrator | 2026-03-05 01:03:19 | INFO  | Task 526853f8-07e0-44b4-b19d-c779094da933 is in state STARTED 2026-03-05 01:03:19.782200 | orchestrator | 2026-03-05 01:03:19 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:22.820932 | orchestrator | 2026-03-05 01:03:22 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:03:22.825091 | orchestrator | 2026-03-05 01:03:22 | INFO  | Task 526853f8-07e0-44b4-b19d-c779094da933 is in state STARTED 2026-03-05 01:03:22.825209 | orchestrator | 2026-03-05 01:03:22 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:25.875197 | orchestrator | 2026-03-05 01:03:25 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:03:25.877202 | orchestrator | 2026-03-05 01:03:25 | INFO  | Task 526853f8-07e0-44b4-b19d-c779094da933 is in state STARTED 2026-03-05 01:03:25.877274 | orchestrator | 2026-03-05 01:03:25 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:28.934674 | orchestrator | 2026-03-05 01:03:28 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:03:28.936826 | orchestrator | 2026-03-05 01:03:28 | INFO  | Task 526853f8-07e0-44b4-b19d-c779094da933 is in state STARTED 2026-03-05 01:03:28.936913 | orchestrator | 2026-03-05 01:03:28 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:31.991339 | orchestrator | 2026-03-05 01:03:31 | INFO  | Task 
ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:03:31.992812 | orchestrator | 2026-03-05 01:03:31 | INFO  | Task 526853f8-07e0-44b4-b19d-c779094da933 is in state STARTED 2026-03-05 01:03:31.992860 | orchestrator | 2026-03-05 01:03:31 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:35.061963 | orchestrator | 2026-03-05 01:03:35 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:03:35.062513 | orchestrator | 2026-03-05 01:03:35 | INFO  | Task 526853f8-07e0-44b4-b19d-c779094da933 is in state STARTED 2026-03-05 01:03:35.062543 | orchestrator | 2026-03-05 01:03:35 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:38.109471 | orchestrator | 2026-03-05 01:03:38 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:03:38.111911 | orchestrator | 2026-03-05 01:03:38 | INFO  | Task 526853f8-07e0-44b4-b19d-c779094da933 is in state STARTED 2026-03-05 01:03:38.111989 | orchestrator | 2026-03-05 01:03:38 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:41.160247 | orchestrator | 2026-03-05 01:03:41 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:03:41.162126 | orchestrator | 2026-03-05 01:03:41 | INFO  | Task 526853f8-07e0-44b4-b19d-c779094da933 is in state STARTED 2026-03-05 01:03:41.162207 | orchestrator | 2026-03-05 01:03:41 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:03:44.214604 | orchestrator | 2026-03-05 01:03:44 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED 2026-03-05 01:03:44.217074 | orchestrator | 2026-03-05 01:03:44 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED 2026-03-05 01:03:44.218495 | orchestrator | 2026-03-05 01:03:44 | INFO  | Task 526853f8-07e0-44b4-b19d-c779094da933 is in state SUCCESS 2026-03-05 01:03:44.218634 | orchestrator | 2026-03-05 01:03:44 | INFO  | Wait 1 second(s) until the next check 2026-03-05 
01:03:47.266610 | orchestrator | 2026-03-05 01:03:47 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED
2026-03-05 01:03:47.267032 | orchestrator | 2026-03-05 01:03:47 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED
2026-03-05 01:03:47.267052 | orchestrator | 2026-03-05 01:03:47 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:03:50.308305 | orchestrator | 2026-03-05 01:03:50 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED
2026-03-05 01:03:50.309407 | orchestrator | 2026-03-05 01:03:50 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED
2026-03-05 01:03:50.309621 | orchestrator | 2026-03-05 01:03:50 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:03:53.352853 | orchestrator | 2026-03-05 01:03:53 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED
2026-03-05 01:03:53.354588 | orchestrator | 2026-03-05 01:03:53 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED
2026-03-05 01:03:53.354638 | orchestrator | 2026-03-05 01:03:53 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:03:56.396810 | orchestrator | 2026-03-05 01:03:56 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED
2026-03-05 01:03:56.398076 | orchestrator | 2026-03-05 01:03:56 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED
2026-03-05 01:03:56.398123 | orchestrator | 2026-03-05 01:03:56 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:03:59.436177 | orchestrator | 2026-03-05 01:03:59 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED
2026-03-05 01:03:59.438310 | orchestrator | 2026-03-05 01:03:59 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED
2026-03-05 01:03:59.438368 | orchestrator | 2026-03-05 01:03:59 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:04:02.488558 | orchestrator | 2026-03-05 01:04:02 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED
2026-03-05 01:04:02.489496 | orchestrator | 2026-03-05 01:04:02 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED
2026-03-05 01:04:02.489676 | orchestrator | 2026-03-05 01:04:02 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:04:05.532407 | orchestrator | 2026-03-05 01:04:05 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED
2026-03-05 01:04:05.534272 | orchestrator | 2026-03-05 01:04:05 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED
2026-03-05 01:04:05.534333 | orchestrator | 2026-03-05 01:04:05 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:04:08.575081 | orchestrator | 2026-03-05 01:04:08 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED
2026-03-05 01:04:08.576454 | orchestrator | 2026-03-05 01:04:08 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED
2026-03-05 01:04:08.576634 | orchestrator | 2026-03-05 01:04:08 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:04:11.626812 | orchestrator | 2026-03-05 01:04:11 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED
2026-03-05 01:04:11.631379 | orchestrator | 2026-03-05 01:04:11 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED
2026-03-05 01:04:11.631451 | orchestrator | 2026-03-05 01:04:11 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:04:14.668664 | orchestrator | 2026-03-05 01:04:14 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state STARTED
2026-03-05 01:04:14.670921 | orchestrator | 2026-03-05 01:04:14 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED
2026-03-05 01:04:14.670963 | orchestrator | 2026-03-05 01:04:14 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:04:17.714198 | orchestrator |
2026-03-05 01:04:17.714326 | orchestrator |
2026-03-05 01:04:17.714354 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-05 01:04:17.714374 | orchestrator |
2026-03-05 01:04:17.714392 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-03-05 01:04:17.714412 | orchestrator | Thursday 05 March 2026 01:03:08 +0000 (0:00:00.146) 0:00:00.146 ********
2026-03-05 01:04:17.714432 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-05 01:04:17.714452 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-05 01:04:17.714576 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-05 01:04:17.714656 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-05 01:04:17.714673 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-05 01:04:17.715301 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-05 01:04:17.715349 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-05 01:04:17.715402 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-05 01:04:17.715414 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-05 01:04:17.715426 | orchestrator |
2026-03-05 01:04:17.715437 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-05 01:04:17.715449 | orchestrator | Thursday 05 March 2026 01:03:13 +0000 (0:00:05.053) 0:00:05.200 ********
2026-03-05 01:04:17.715460 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-05 01:04:17.715472 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-05 01:04:17.715482 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-05 01:04:17.715493 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-05 01:04:17.715505 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-05 01:04:17.715515 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-05 01:04:17.715527 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-05 01:04:17.715537 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-05 01:04:17.715547 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-05 01:04:17.715557 | orchestrator |
2026-03-05 01:04:17.715567 | orchestrator | TASK [Create share directory] **************************************************
2026-03-05 01:04:17.715577 | orchestrator | Thursday 05 March 2026 01:03:17 +0000 (0:00:04.631) 0:00:09.831 ********
2026-03-05 01:04:17.715587 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-05 01:04:17.715598 | orchestrator |
2026-03-05 01:04:17.715608 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-05 01:04:17.715618 | orchestrator | Thursday 05 March 2026 01:03:18 +0000 (0:00:01.046) 0:00:10.878 ********
2026-03-05 01:04:17.715628 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-05 01:04:17.715638 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-05 01:04:17.715648 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-05 01:04:17.715662 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-05 01:04:17.715685 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-05 01:04:17.715704 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-05 01:04:17.715721 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-05 01:04:17.715736 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-05 01:04:17.715752 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-05 01:04:17.715767 | orchestrator |
2026-03-05 01:04:17.715783 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-05 01:04:17.715797 | orchestrator | Thursday 05 March 2026 01:03:31 +0000 (0:00:13.221) 0:00:24.100 ********
2026-03-05 01:04:17.715810 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-05 01:04:17.715823 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-05 01:04:17.715836 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-05 01:04:17.715851 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-05 01:04:17.715955 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-05 01:04:17.715977 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-05 01:04:17.715992 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-03-05 01:04:17.716007 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-03-05 01:04:17.716020 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-03-05 01:04:17.716035 | orchestrator |
2026-03-05 01:04:17.716050 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-03-05 01:04:17.716066 | orchestrator | Thursday 05 March 2026 01:03:35 +0000 (0:00:03.372) 0:00:27.472 ********
2026-03-05 01:04:17.716095 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-03-05 01:04:17.716113 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-05 01:04:17.716156 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-05 01:04:17.716172 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-03-05 01:04:17.716187 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-05 01:04:17.716202 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-03-05 01:04:17.716217 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-03-05 01:04:17.716233 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-05 01:04:17.716249 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-05 01:04:17.716265 | orchestrator |
2026-03-05 01:04:17.716281 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:04:17.716298 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:04:17.716316 | orchestrator |
2026-03-05 01:04:17.716333 | orchestrator |
2026-03-05 01:04:17.716345 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:04:17.716355 | orchestrator | Thursday 05 March 2026 01:03:42 +0000 (0:00:07.214) 0:00:34.687 ********
2026-03-05 01:04:17.716365 | orchestrator | ===============================================================================
2026-03-05 01:04:17.716375 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.22s
2026-03-05 01:04:17.716384 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.21s
2026-03-05 01:04:17.716394 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.05s
2026-03-05 01:04:17.716404 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.63s
2026-03-05 01:04:17.716413 | orchestrator | Check if target directories exist --------------------------------------- 3.37s
2026-03-05 01:04:17.716423 | orchestrator | Create share directory -------------------------------------------------- 1.05s
2026-03-05 01:04:17.716433 | orchestrator |
2026-03-05 01:04:17.716442 | orchestrator |
2026-03-05 01:04:17.716452 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:04:17.716462 | orchestrator |
2026-03-05 01:04:17.716472 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:04:17.716481 | orchestrator | Thursday 05 March 2026 01:01:37 +0000 (0:00:00.275) 0:00:00.275 ********
2026-03-05 01:04:17.716491 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:04:17.716502 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:04:17.716512 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:04:17.716521 | orchestrator |
2026-03-05 01:04:17.716531 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:04:17.716541 | orchestrator | Thursday 05 March 2026 01:01:37 +0000 (0:00:00.332) 0:00:00.607 ********
2026-03-05 01:04:17.716563 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-05 01:04:17.716573 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-05 01:04:17.716583 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-05 01:04:17.716592 | orchestrator |
2026-03-05 01:04:17.716602 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-03-05 01:04:17.716612 | orchestrator |
2026-03-05 01:04:17.716621 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-05 01:04:17.716631 | orchestrator | Thursday 05 March 2026 01:01:37 +0000 (0:00:00.435) 0:00:01.043 ********
2026-03-05 01:04:17.716641 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:04:17.716651 | orchestrator |
2026-03-05 01:04:17.716661 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-03-05 01:04:17.716670 | orchestrator | Thursday 05 March 2026 01:01:38 +0000 (0:00:00.600) 0:00:01.643 ********
2026-03-05 01:04:17.716737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:04:17.716764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:04:17.716778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:04:17.716798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:04:17.716810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:04:17.716938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:04:17.716962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:04:17.716974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:04:17.716984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:04:17.717002 | orchestrator |
2026-03-05 01:04:17.717013 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-03-05 01:04:17.717023 | orchestrator | Thursday 05 March 2026 01:01:40 +0000 (0:00:01.905) 0:00:03.548 ********
2026-03-05 01:04:17.717033 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:04:17.717043 | orchestrator |
2026-03-05 01:04:17.717052 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-03-05 01:04:17.717062 | orchestrator | Thursday 05 March 2026 01:01:40 +0000 (0:00:00.138) 0:00:03.687 ********
2026-03-05 01:04:17.717072 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:04:17.717081 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:04:17.717091 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:04:17.717101 | orchestrator |
2026-03-05 01:04:17.717110 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-03-05 01:04:17.717141 | orchestrator | Thursday 05 March 2026 01:01:40 +0000 (0:00:00.452) 0:00:04.139 ********
2026-03-05 01:04:17.717152 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-05 01:04:17.717161 | orchestrator |
2026-03-05 01:04:17.717171 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-05 01:04:17.717181 | orchestrator | Thursday 05 March 2026 01:01:41 +0000 (0:00:00.821) 0:00:04.961 ********
2026-03-05 01:04:17.717190 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:04:17.717200 | orchestrator |
2026-03-05 01:04:17.717210 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-03-05 01:04:17.717219 | orchestrator | Thursday 05 March 2026 01:01:42 +0000 (0:00:00.571) 0:00:05.533 ********
2026-03-05 01:04:17.717238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:04:17.717256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:04:17.717267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:04:17.717285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:04:17.717297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:04:17.717316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:04:17.717332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:04:17.717343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:04:17.717360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:04:17.717370 | orchestrator |
2026-03-05 01:04:17.717380 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-03-05 01:04:17.717390 | orchestrator | Thursday 05 March 2026 01:01:46 +0000 (0:00:03.717) 0:00:09.251 ********
2026-03-05 01:04:17.717400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:04:17.717412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:04:17.717429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:04:17.717440 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:04:17.717456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:04:17.717480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:04:17.717490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:04:17.717500 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:04:17.717511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:04:17.717529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:04:17.717545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-05 01:04:17.717561 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:04:17.717571 | orchestrator |
2026-03-05 01:04:17.717581 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-03-05 01:04:17.717591 | orchestrator | Thursday 05 March 2026 01:01:46 +0000 (0:00:00.575) 0:00:09.826 ********
2026-03-05 01:04:17.717601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-05 01:04:17.717613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-05 01:04:17.717623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/,
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 01:04:17.717633 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.717650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 01:04:17.717666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-05 01:04:17.717683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 01:04:17.717693 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:04:17.717703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 01:04:17.717714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 01:04:17.717725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 01:04:17.717735 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:04:17.717745 | orchestrator | 2026-03-05 01:04:17.717755 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-05 01:04:17.717769 | orchestrator | Thursday 05 March 2026 01:01:47 +0000 (0:00:00.760) 0:00:10.587 ******** 2026-03-05 01:04:17.717785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:04:17.717802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:04:17.717814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:04:17.717825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 01:04:17.717842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-03-05 01:04:17.717863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 01:04:17.717874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:04:17.717884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
2026-03-05 01:04:17.717895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:04:17.717905 | orchestrator | 2026-03-05 01:04:17.717915 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-05 01:04:17.717925 | orchestrator | Thursday 05 March 2026 01:01:50 +0000 (0:00:03.506) 0:00:14.093 ******** 2026-03-05 01:04:17.717936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2026-03-05 01:04:17.717958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 01:04:17.717975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:04:17.717986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 01:04:17.717997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:04:17.718007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 01:04:17.718072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:04:17.718098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:04:17.718109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:04:17.718137 | orchestrator | 2026-03-05 01:04:17.718148 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-05 01:04:17.718158 | orchestrator | Thursday 05 March 2026 01:01:56 +0000 (0:00:05.658) 0:00:19.751 ******** 2026-03-05 01:04:17.718168 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:04:17.718178 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:04:17.718188 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:04:17.718197 | orchestrator | 2026-03-05 01:04:17.718207 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-05 01:04:17.718217 | orchestrator | Thursday 05 March 2026 01:01:58 +0000 (0:00:01.476) 0:00:21.228 ******** 2026-03-05 01:04:17.718226 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.718236 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:04:17.718246 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:04:17.718255 | orchestrator | 2026-03-05 01:04:17.718265 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-05 01:04:17.718275 | orchestrator | Thursday 05 March 2026 01:01:58 +0000 (0:00:00.527) 0:00:21.755 ******** 2026-03-05 01:04:17.718284 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.718294 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:04:17.718303 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:04:17.718313 | orchestrator | 2026-03-05 01:04:17.718323 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-05 01:04:17.718332 | orchestrator | Thursday 05 March 2026 01:01:58 +0000 (0:00:00.272) 0:00:22.028 ******** 
2026-03-05 01:04:17.718342 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.718352 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:04:17.718361 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:04:17.718371 | orchestrator | 2026-03-05 01:04:17.718380 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-05 01:04:17.718390 | orchestrator | Thursday 05 March 2026 01:01:59 +0000 (0:00:00.423) 0:00:22.452 ******** 2026-03-05 01:04:17.718400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 01:04:17.718425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 01:04:17.718449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 01:04:17.718467 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.718493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 01:04:17.718514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 01:04:17.718531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 01:04:17.718563 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:04:17.718592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-05 01:04:17.718619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-05 01:04:17.718637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-05 01:04:17.718653 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:04:17.718669 | orchestrator | 2026-03-05 01:04:17.718683 | orchestrator 
| TASK [keystone : include_tasks] ************************************************ 2026-03-05 01:04:17.718701 | orchestrator | Thursday 05 March 2026 01:01:59 +0000 (0:00:00.641) 0:00:23.093 ******** 2026-03-05 01:04:17.718717 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.718733 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:04:17.718750 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:04:17.718767 | orchestrator | 2026-03-05 01:04:17.718783 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-05 01:04:17.718800 | orchestrator | Thursday 05 March 2026 01:02:00 +0000 (0:00:00.270) 0:00:23.363 ******** 2026-03-05 01:04:17.718811 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-05 01:04:17.718822 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-05 01:04:17.718840 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-05 01:04:17.718850 | orchestrator | 2026-03-05 01:04:17.718860 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-05 01:04:17.718869 | orchestrator | Thursday 05 March 2026 01:02:01 +0000 (0:00:01.616) 0:00:24.979 ******** 2026-03-05 01:04:17.718879 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 01:04:17.718889 | orchestrator | 2026-03-05 01:04:17.718898 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-05 01:04:17.718908 | orchestrator | Thursday 05 March 2026 01:02:02 +0000 (0:00:00.815) 0:00:25.794 ******** 2026-03-05 01:04:17.718917 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.718927 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:04:17.718937 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:04:17.718946 | 
orchestrator | 2026-03-05 01:04:17.718956 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-05 01:04:17.718965 | orchestrator | Thursday 05 March 2026 01:02:03 +0000 (0:00:00.711) 0:00:26.506 ******** 2026-03-05 01:04:17.718975 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-05 01:04:17.718985 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 01:04:17.718994 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-05 01:04:17.719004 | orchestrator | 2026-03-05 01:04:17.719014 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-05 01:04:17.719023 | orchestrator | Thursday 05 March 2026 01:02:04 +0000 (0:00:00.902) 0:00:27.408 ******** 2026-03-05 01:04:17.719033 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:04:17.719043 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:04:17.719053 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:04:17.719062 | orchestrator | 2026-03-05 01:04:17.719072 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-05 01:04:17.719082 | orchestrator | Thursday 05 March 2026 01:02:04 +0000 (0:00:00.282) 0:00:27.691 ******** 2026-03-05 01:04:17.719091 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-05 01:04:17.719101 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-05 01:04:17.719110 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-05 01:04:17.719157 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-05 01:04:17.719183 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-05 01:04:17.719194 | orchestrator | changed: [testbed-node-2] => (item={'src': 
'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-05 01:04:17.719204 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-05 01:04:17.719214 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-05 01:04:17.719224 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-05 01:04:17.719233 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-05 01:04:17.719243 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-05 01:04:17.719257 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-05 01:04:17.719267 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-05 01:04:17.719277 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-05 01:04:17.719287 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-05 01:04:17.719303 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-05 01:04:17.719313 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-05 01:04:17.719323 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-05 01:04:17.719333 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-05 01:04:17.719342 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-05 01:04:17.719352 | orchestrator | changed: 
[testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-05 01:04:17.719362 | orchestrator | 2026-03-05 01:04:17.719371 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-05 01:04:17.719381 | orchestrator | Thursday 05 March 2026 01:02:13 +0000 (0:00:08.905) 0:00:36.597 ******** 2026-03-05 01:04:17.719391 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-05 01:04:17.719400 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-05 01:04:17.719410 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-05 01:04:17.719420 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-05 01:04:17.719430 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-05 01:04:17.719446 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-05 01:04:17.719467 | orchestrator | 2026-03-05 01:04:17.719491 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-05 01:04:17.719508 | orchestrator | Thursday 05 March 2026 01:02:16 +0000 (0:00:02.968) 0:00:39.565 ******** 2026-03-05 01:04:17.719525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:04:17.719553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:04:17.719581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-05 01:04:17.719612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 01:04:17.719629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 01:04:17.719645 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-05 01:04:17.719663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:04:17.719690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:04:17.719721 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-05 01:04:17.719749 | orchestrator | 2026-03-05 01:04:17.719766 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-05 01:04:17.719783 | orchestrator | Thursday 05 March 2026 01:02:18 +0000 (0:00:02.604) 0:00:42.169 ******** 2026-03-05 01:04:17.719799 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.719816 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:04:17.719832 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:04:17.719850 | orchestrator | 2026-03-05 01:04:17.719867 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-05 01:04:17.719883 | orchestrator | Thursday 05 March 2026 01:02:19 +0000 (0:00:00.327) 0:00:42.497 ******** 2026-03-05 01:04:17.719901 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:04:17.719912 | orchestrator | 2026-03-05 01:04:17.719922 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-05 01:04:17.719931 | orchestrator | Thursday 05 March 2026 01:02:21 +0000 (0:00:02.492) 0:00:44.989 ******** 2026-03-05 01:04:17.719941 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:04:17.719951 | orchestrator | 2026-03-05 01:04:17.719960 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-05 01:04:17.719970 | 
orchestrator | Thursday 05 March 2026 01:02:24 +0000 (0:00:02.516) 0:00:47.506 ******** 2026-03-05 01:04:17.719980 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:04:17.719989 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:04:17.719999 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:04:17.720009 | orchestrator | 2026-03-05 01:04:17.720018 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-05 01:04:17.720028 | orchestrator | Thursday 05 March 2026 01:02:25 +0000 (0:00:01.175) 0:00:48.682 ******** 2026-03-05 01:04:17.720038 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:04:17.720048 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:04:17.720057 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:04:17.720067 | orchestrator | 2026-03-05 01:04:17.720076 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-05 01:04:17.720086 | orchestrator | Thursday 05 March 2026 01:02:25 +0000 (0:00:00.355) 0:00:49.037 ******** 2026-03-05 01:04:17.720096 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.720105 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:04:17.720115 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:04:17.720180 | orchestrator | 2026-03-05 01:04:17.720190 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-05 01:04:17.720200 | orchestrator | Thursday 05 March 2026 01:02:26 +0000 (0:00:00.340) 0:00:49.378 ******** 2026-03-05 01:04:17.720210 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:04:17.720220 | orchestrator | 2026-03-05 01:04:17.720230 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-05 01:04:17.720240 | orchestrator | Thursday 05 March 2026 01:02:42 +0000 (0:00:16.704) 0:01:06.083 ******** 2026-03-05 01:04:17.720249 | orchestrator | changed: [testbed-node-0] 2026-03-05 
01:04:17.720259 | orchestrator | 2026-03-05 01:04:17.720269 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-05 01:04:17.720279 | orchestrator | Thursday 05 March 2026 01:02:54 +0000 (0:00:11.658) 0:01:17.741 ******** 2026-03-05 01:04:17.720298 | orchestrator | 2026-03-05 01:04:17.720308 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-05 01:04:17.720317 | orchestrator | Thursday 05 March 2026 01:02:54 +0000 (0:00:00.066) 0:01:17.808 ******** 2026-03-05 01:04:17.720327 | orchestrator | 2026-03-05 01:04:17.720337 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-05 01:04:17.720346 | orchestrator | Thursday 05 March 2026 01:02:54 +0000 (0:00:00.067) 0:01:17.875 ******** 2026-03-05 01:04:17.720356 | orchestrator | 2026-03-05 01:04:17.720366 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-05 01:04:17.720376 | orchestrator | Thursday 05 March 2026 01:02:54 +0000 (0:00:00.093) 0:01:17.969 ******** 2026-03-05 01:04:17.720393 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:04:17.720407 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:04:17.720426 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:04:17.720443 | orchestrator | 2026-03-05 01:04:17.720456 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-05 01:04:17.720469 | orchestrator | Thursday 05 March 2026 01:03:05 +0000 (0:00:10.386) 0:01:28.355 ******** 2026-03-05 01:04:17.720482 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:04:17.720495 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:04:17.720507 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:04:17.720519 | orchestrator | 2026-03-05 01:04:17.720538 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] 
************************ 2026-03-05 01:04:17.720553 | orchestrator | Thursday 05 March 2026 01:03:10 +0000 (0:00:05.132) 0:01:33.488 ******** 2026-03-05 01:04:17.720566 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:04:17.720579 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:04:17.720592 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:04:17.720605 | orchestrator | 2026-03-05 01:04:17.720618 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-05 01:04:17.720632 | orchestrator | Thursday 05 March 2026 01:03:21 +0000 (0:00:11.442) 0:01:44.931 ******** 2026-03-05 01:04:17.720645 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:04:17.720658 | orchestrator | 2026-03-05 01:04:17.720672 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-05 01:04:17.720692 | orchestrator | Thursday 05 March 2026 01:03:22 +0000 (0:00:00.634) 0:01:45.566 ******** 2026-03-05 01:04:17.720706 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:04:17.720715 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:04:17.720723 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:04:17.720731 | orchestrator | 2026-03-05 01:04:17.720739 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-05 01:04:17.720747 | orchestrator | Thursday 05 March 2026 01:03:23 +0000 (0:00:00.706) 0:01:46.272 ******** 2026-03-05 01:04:17.720755 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:04:17.720763 | orchestrator | 2026-03-05 01:04:17.720771 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-05 01:04:17.720779 | orchestrator | Thursday 05 March 2026 01:03:24 +0000 (0:00:01.634) 0:01:47.906 ******** 2026-03-05 01:04:17.720787 | orchestrator | changed: [testbed-node-0] => 
(item=RegionOne) 2026-03-05 01:04:17.720795 | orchestrator | 2026-03-05 01:04:17.720803 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-05 01:04:17.720811 | orchestrator | Thursday 05 March 2026 01:03:38 +0000 (0:00:13.636) 0:02:01.543 ******** 2026-03-05 01:04:17.720819 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-05 01:04:17.720826 | orchestrator | 2026-03-05 01:04:17.720834 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-05 01:04:17.720842 | orchestrator | Thursday 05 March 2026 01:04:05 +0000 (0:00:27.224) 0:02:28.767 ******** 2026-03-05 01:04:17.720850 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-05 01:04:17.720866 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-05 01:04:17.720874 | orchestrator | 2026-03-05 01:04:17.720882 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-05 01:04:17.720890 | orchestrator | Thursday 05 March 2026 01:04:12 +0000 (0:00:06.532) 0:02:35.300 ******** 2026-03-05 01:04:17.720898 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.720906 | orchestrator | 2026-03-05 01:04:17.720914 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-05 01:04:17.720922 | orchestrator | Thursday 05 March 2026 01:04:12 +0000 (0:00:00.130) 0:02:35.431 ******** 2026-03-05 01:04:17.720930 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.720938 | orchestrator | 2026-03-05 01:04:17.720945 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-05 01:04:17.720953 | orchestrator | Thursday 05 March 2026 01:04:12 +0000 (0:00:00.133) 0:02:35.564 ******** 2026-03-05 01:04:17.720961 | 
orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.720969 | orchestrator | 2026-03-05 01:04:17.720977 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-05 01:04:17.720985 | orchestrator | Thursday 05 March 2026 01:04:12 +0000 (0:00:00.153) 0:02:35.718 ******** 2026-03-05 01:04:17.720993 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.721001 | orchestrator | 2026-03-05 01:04:17.721009 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-05 01:04:17.721017 | orchestrator | Thursday 05 March 2026 01:04:13 +0000 (0:00:00.605) 0:02:36.324 ******** 2026-03-05 01:04:17.721029 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:04:17.721048 | orchestrator | 2026-03-05 01:04:17.721064 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-05 01:04:17.721077 | orchestrator | Thursday 05 March 2026 01:04:16 +0000 (0:00:03.599) 0:02:39.923 ******** 2026-03-05 01:04:17.721090 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:04:17.721103 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:04:17.721115 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:04:17.721152 | orchestrator | 2026-03-05 01:04:17.721166 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:04:17.721181 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-05 01:04:17.721196 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-05 01:04:17.721209 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-05 01:04:17.721223 | orchestrator | 2026-03-05 01:04:17.721236 | orchestrator | 2026-03-05 01:04:17.721249 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-05 01:04:17.721260 | orchestrator | Thursday 05 March 2026 01:04:17 +0000 (0:00:00.438) 0:02:40.361 ******** 2026-03-05 01:04:17.721273 | orchestrator | =============================================================================== 2026-03-05 01:04:17.721285 | orchestrator | service-ks-register : keystone | Creating services --------------------- 27.22s 2026-03-05 01:04:17.721298 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.70s 2026-03-05 01:04:17.721322 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.64s 2026-03-05 01:04:17.721331 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.66s 2026-03-05 01:04:17.721339 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.44s 2026-03-05 01:04:17.721347 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 10.39s 2026-03-05 01:04:17.721355 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.91s 2026-03-05 01:04:17.721363 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.53s 2026-03-05 01:04:17.721379 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.66s 2026-03-05 01:04:17.721387 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.13s 2026-03-05 01:04:17.721401 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.72s 2026-03-05 01:04:17.721409 | orchestrator | keystone : Creating default user role ----------------------------------- 3.60s 2026-03-05 01:04:17.721417 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.51s 2026-03-05 01:04:17.721425 | orchestrator | keystone : Copying files for 
keystone-ssh ------------------------------- 2.97s 2026-03-05 01:04:17.721433 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.60s 2026-03-05 01:04:17.721441 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.52s 2026-03-05 01:04:17.721449 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.49s 2026-03-05 01:04:17.721457 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.91s 2026-03-05 01:04:17.721465 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.63s 2026-03-05 01:04:17.721472 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.62s 2026-03-05 01:04:17.721481 | orchestrator | 2026-03-05 01:04:17 | INFO  | Task ed0e3286-ab60-4825-ba22-a7712a9488c0 is in state SUCCESS 2026-03-05 01:04:17.721489 | orchestrator | 2026-03-05 01:04:17 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED 2026-03-05 01:04:17.721497 | orchestrator | 2026-03-05 01:04:17 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:04:20.744677 | orchestrator | 2026-03-05 01:04:20 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:04:20.745388 | orchestrator | 2026-03-05 01:04:20 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED 2026-03-05 01:04:20.746521 | orchestrator | 2026-03-05 01:04:20 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:04:20.750311 | orchestrator | 2026-03-05 01:04:20 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:04:20.750387 | orchestrator | 2026-03-05 01:04:20 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:04:20.750410 | orchestrator | 2026-03-05 01:04:20 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:04:23.781116 | 
orchestrator | 2026-03-05 01:04:23 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:04:23.781893 | orchestrator | 2026-03-05 01:04:23 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED 2026-03-05 01:04:23.782823 | orchestrator | 2026-03-05 01:04:23 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:04:23.783841 | orchestrator | 2026-03-05 01:04:23 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:04:23.785784 | orchestrator | 2026-03-05 01:04:23 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:04:23.786788 | orchestrator | 2026-03-05 01:04:23 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:04:26.829737 | orchestrator | 2026-03-05 01:04:26 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:04:26.832041 | orchestrator | 2026-03-05 01:04:26 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED 2026-03-05 01:04:26.833924 | orchestrator | 2026-03-05 01:04:26 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:04:26.835796 | orchestrator | 2026-03-05 01:04:26 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:04:26.837555 | orchestrator | 2026-03-05 01:04:26 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:04:26.837625 | orchestrator | 2026-03-05 01:04:26 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:04:29.886292 | orchestrator | 2026-03-05 01:04:29 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:04:29.888962 | orchestrator | 2026-03-05 01:04:29 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED 2026-03-05 01:04:29.891848 | orchestrator | 2026-03-05 01:04:29 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:04:29.894255 | 
orchestrator | 2026-03-05 01:04:29 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:04:29.896519 | orchestrator | 2026-03-05 01:04:29 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:04:29.896572 | orchestrator | 2026-03-05 01:04:29 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:04:32.952891 | orchestrator | 2026-03-05 01:04:32 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:04:32.955367 | orchestrator | 2026-03-05 01:04:32 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED 2026-03-05 01:04:32.957313 | orchestrator | 2026-03-05 01:04:32 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:04:32.959305 | orchestrator | 2026-03-05 01:04:32 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:04:32.961760 | orchestrator | 2026-03-05 01:04:32 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:04:32.961839 | orchestrator | 2026-03-05 01:04:32 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:04:36.003791 | orchestrator | 2026-03-05 01:04:36 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:04:36.006494 | orchestrator | 2026-03-05 01:04:36 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED 2026-03-05 01:04:36.007890 | orchestrator | 2026-03-05 01:04:36 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:04:36.010185 | orchestrator | 2026-03-05 01:04:36 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:04:36.012632 | orchestrator | 2026-03-05 01:04:36 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:04:36.012681 | orchestrator | 2026-03-05 01:04:36 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:04:39.068467 | orchestrator | 2026-03-05 
01:04:39 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:04:39.069811 | orchestrator | 2026-03-05 01:04:39 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED 2026-03-05 01:04:39.071509 | orchestrator | 2026-03-05 01:04:39 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:04:39.073581 | orchestrator | 2026-03-05 01:04:39 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:04:39.075147 | orchestrator | 2026-03-05 01:04:39 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:04:39.075212 | orchestrator | 2026-03-05 01:04:39 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:04:42.119284 | orchestrator | 2026-03-05 01:04:42 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:04:42.122997 | orchestrator | 2026-03-05 01:04:42 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state STARTED 2026-03-05 01:04:42.123484 | orchestrator | 2026-03-05 01:04:42 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:04:42.125022 | orchestrator | 2026-03-05 01:04:42 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:04:42.126335 | orchestrator | 2026-03-05 01:04:42 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:04:42.126374 | orchestrator | 2026-03-05 01:04:42 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:04:45.178438 | orchestrator | 2026-03-05 01:04:45 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:04:45.181316 | orchestrator | 2026-03-05 01:04:45 | INFO  | Task a586a961-497b-451b-972e-fb87f17e9252 is in state SUCCESS 2026-03-05 01:04:45.185544 | orchestrator | 2026-03-05 01:04:45 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:04:45.187603 | orchestrator | 2026-03-05 
01:04:45 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:04:45.190463 | orchestrator | 2026-03-05 01:04:45 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:04:45.191048 | orchestrator | 2026-03-05 01:04:45 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:04:48.247893 | orchestrator | 2026-03-05 01:04:48 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:04:48.250920 | orchestrator | 2026-03-05 01:04:48 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:04:48.253525 | orchestrator | 2026-03-05 01:04:48 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:04:48.255669 | orchestrator | 2026-03-05 01:04:48 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:04:48.257757 | orchestrator | 2026-03-05 01:04:48 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:04:48.257808 | orchestrator | 2026-03-05 01:04:48 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:04:51.311536 | orchestrator | 2026-03-05 01:04:51 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:04:51.313254 | orchestrator | 2026-03-05 01:04:51 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:04:51.315242 | orchestrator | 2026-03-05 01:04:51 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:04:51.320036 | orchestrator | 2026-03-05 01:04:51 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:04:51.324837 | orchestrator | 2026-03-05 01:04:51 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:04:51.336485 | orchestrator | 2026-03-05 01:04:51 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:04:54.364881 | orchestrator | 2026-03-05 01:04:54 | INFO  | Task 
baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:04:54.367338 | orchestrator | 2026-03-05 01:04:54 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:04:54.369552 | orchestrator | 2026-03-05 01:04:54 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:04:54.371751 | orchestrator | 2026-03-05 01:04:54 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:04:54.373880 | orchestrator | 2026-03-05 01:04:54 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:04:54.373944 | orchestrator | 2026-03-05 01:04:54 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:04:57.426151 | orchestrator | 2026-03-05 01:04:57 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:04:57.429663 | orchestrator | 2026-03-05 01:04:57 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:04:57.432944 | orchestrator | 2026-03-05 01:04:57 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:04:57.434968 | orchestrator | 2026-03-05 01:04:57 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:04:57.436899 | orchestrator | 2026-03-05 01:04:57 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:04:57.436953 | orchestrator | 2026-03-05 01:04:57 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:00.480374 | orchestrator | 2026-03-05 01:05:00 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:00.481447 | orchestrator | 2026-03-05 01:05:00 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:00.482262 | orchestrator | 2026-03-05 01:05:00 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:00.483419 | orchestrator | 2026-03-05 01:05:00 | INFO  | Task 
151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:00.484475 | orchestrator | 2026-03-05 01:05:00 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:00.484509 | orchestrator | 2026-03-05 01:05:00 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:03.551436 | orchestrator | 2026-03-05 01:05:03 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:03.553003 | orchestrator | 2026-03-05 01:05:03 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:03.554327 | orchestrator | 2026-03-05 01:05:03 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:03.555384 | orchestrator | 2026-03-05 01:05:03 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:03.556648 | orchestrator | 2026-03-05 01:05:03 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:03.556681 | orchestrator | 2026-03-05 01:05:03 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:06.602239 | orchestrator | 2026-03-05 01:05:06 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:06.603246 | orchestrator | 2026-03-05 01:05:06 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:06.604730 | orchestrator | 2026-03-05 01:05:06 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:06.605472 | orchestrator | 2026-03-05 01:05:06 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:06.607866 | orchestrator | 2026-03-05 01:05:06 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:06.607922 | orchestrator | 2026-03-05 01:05:06 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:09.650866 | orchestrator | 2026-03-05 01:05:09 | INFO  | Task 
baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:09.652740 | orchestrator | 2026-03-05 01:05:09 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:09.655681 | orchestrator | 2026-03-05 01:05:09 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:09.656236 | orchestrator | 2026-03-05 01:05:09 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:09.657457 | orchestrator | 2026-03-05 01:05:09 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:09.657517 | orchestrator | 2026-03-05 01:05:09 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:12.702911 | orchestrator | 2026-03-05 01:05:12 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:12.705366 | orchestrator | 2026-03-05 01:05:12 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:12.707160 | orchestrator | 2026-03-05 01:05:12 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:12.709674 | orchestrator | 2026-03-05 01:05:12 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:12.710716 | orchestrator | 2026-03-05 01:05:12 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:12.710734 | orchestrator | 2026-03-05 01:05:12 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:15.764046 | orchestrator | 2026-03-05 01:05:15 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:15.771651 | orchestrator | 2026-03-05 01:05:15 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:15.771767 | orchestrator | 2026-03-05 01:05:15 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:15.771779 | orchestrator | 2026-03-05 01:05:15 | INFO  | Task 
151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:15.771786 | orchestrator | 2026-03-05 01:05:15 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:15.771794 | orchestrator | 2026-03-05 01:05:15 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:18.805083 | orchestrator | 2026-03-05 01:05:18 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:18.805191 | orchestrator | 2026-03-05 01:05:18 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:18.806219 | orchestrator | 2026-03-05 01:05:18 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:18.806768 | orchestrator | 2026-03-05 01:05:18 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:18.807479 | orchestrator | 2026-03-05 01:05:18 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:18.807508 | orchestrator | 2026-03-05 01:05:18 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:21.849003 | orchestrator | 2026-03-05 01:05:21 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:21.850310 | orchestrator | 2026-03-05 01:05:21 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:21.850426 | orchestrator | 2026-03-05 01:05:21 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:21.852305 | orchestrator | 2026-03-05 01:05:21 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:21.853093 | orchestrator | 2026-03-05 01:05:21 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:21.853165 | orchestrator | 2026-03-05 01:05:21 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:24.902565 | orchestrator | 2026-03-05 01:05:24 | INFO  | Task 
baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:24.902830 | orchestrator | 2026-03-05 01:05:24 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:24.903713 | orchestrator | 2026-03-05 01:05:24 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:24.904678 | orchestrator | 2026-03-05 01:05:24 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:24.905646 | orchestrator | 2026-03-05 01:05:24 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:24.905737 | orchestrator | 2026-03-05 01:05:24 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:27.939469 | orchestrator | 2026-03-05 01:05:27 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:27.940327 | orchestrator | 2026-03-05 01:05:27 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:27.941984 | orchestrator | 2026-03-05 01:05:27 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:27.942767 | orchestrator | 2026-03-05 01:05:27 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:27.943648 | orchestrator | 2026-03-05 01:05:27 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:27.944164 | orchestrator | 2026-03-05 01:05:27 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:30.975650 | orchestrator | 2026-03-05 01:05:30 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:30.975857 | orchestrator | 2026-03-05 01:05:30 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:30.976715 | orchestrator | 2026-03-05 01:05:30 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:30.977210 | orchestrator | 2026-03-05 01:05:30 | INFO  | Task 
151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:30.977936 | orchestrator | 2026-03-05 01:05:30 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:30.977958 | orchestrator | 2026-03-05 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:34.055818 | orchestrator | 2026-03-05 01:05:34 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:34.055901 | orchestrator | 2026-03-05 01:05:34 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:34.055910 | orchestrator | 2026-03-05 01:05:34 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:34.055917 | orchestrator | 2026-03-05 01:05:34 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:34.055924 | orchestrator | 2026-03-05 01:05:34 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:34.055931 | orchestrator | 2026-03-05 01:05:34 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:37.078315 | orchestrator | 2026-03-05 01:05:37 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:37.078620 | orchestrator | 2026-03-05 01:05:37 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:37.079372 | orchestrator | 2026-03-05 01:05:37 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:37.079940 | orchestrator | 2026-03-05 01:05:37 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:37.080713 | orchestrator | 2026-03-05 01:05:37 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:37.080757 | orchestrator | 2026-03-05 01:05:37 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:40.106183 | orchestrator | 2026-03-05 01:05:40 | INFO  | Task 
baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:40.106361 | orchestrator | 2026-03-05 01:05:40 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:40.107254 | orchestrator | 2026-03-05 01:05:40 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:40.107748 | orchestrator | 2026-03-05 01:05:40 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:40.108511 | orchestrator | 2026-03-05 01:05:40 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:40.108545 | orchestrator | 2026-03-05 01:05:40 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:43.155634 | orchestrator | 2026-03-05 01:05:43 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:43.161209 | orchestrator | 2026-03-05 01:05:43 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:43.161735 | orchestrator | 2026-03-05 01:05:43 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:43.162794 | orchestrator | 2026-03-05 01:05:43 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:43.164026 | orchestrator | 2026-03-05 01:05:43 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:43.164202 | orchestrator | 2026-03-05 01:05:43 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:46.206358 | orchestrator | 2026-03-05 01:05:46 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:46.206800 | orchestrator | 2026-03-05 01:05:46 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:46.207787 | orchestrator | 2026-03-05 01:05:46 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:46.208571 | orchestrator | 2026-03-05 01:05:46 | INFO  | Task 
151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:46.209420 | orchestrator | 2026-03-05 01:05:46 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:46.209444 | orchestrator | 2026-03-05 01:05:46 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:49.251989 | orchestrator | 2026-03-05 01:05:49 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:49.252595 | orchestrator | 2026-03-05 01:05:49 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:49.254287 | orchestrator | 2026-03-05 01:05:49 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:49.256217 | orchestrator | 2026-03-05 01:05:49 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:49.257470 | orchestrator | 2026-03-05 01:05:49 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:49.257510 | orchestrator | 2026-03-05 01:05:49 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:52.291043 | orchestrator | 2026-03-05 01:05:52 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:52.292664 | orchestrator | 2026-03-05 01:05:52 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:52.295276 | orchestrator | 2026-03-05 01:05:52 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:52.296236 | orchestrator | 2026-03-05 01:05:52 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:52.297530 | orchestrator | 2026-03-05 01:05:52 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:52.297846 | orchestrator | 2026-03-05 01:05:52 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:55.340780 | orchestrator | 2026-03-05 01:05:55 | INFO  | Task 
baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:55.342868 | orchestrator | 2026-03-05 01:05:55 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:55.343661 | orchestrator | 2026-03-05 01:05:55 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:55.345560 | orchestrator | 2026-03-05 01:05:55 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:55.347603 | orchestrator | 2026-03-05 01:05:55 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:55.347646 | orchestrator | 2026-03-05 01:05:55 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:05:58.429443 | orchestrator | 2026-03-05 01:05:58 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:05:58.429793 | orchestrator | 2026-03-05 01:05:58 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:05:58.430746 | orchestrator | 2026-03-05 01:05:58 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:05:58.431483 | orchestrator | 2026-03-05 01:05:58 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:05:58.432365 | orchestrator | 2026-03-05 01:05:58 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:05:58.432412 | orchestrator | 2026-03-05 01:05:58 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:01.468476 | orchestrator | 2026-03-05 01:06:01 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:06:01.470211 | orchestrator | 2026-03-05 01:06:01 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:06:01.471783 | orchestrator | 2026-03-05 01:06:01 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:06:01.474238 | orchestrator | 2026-03-05 01:06:01 | INFO  | Task 
151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:06:01.479752 | orchestrator | 2026-03-05 01:06:01 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:06:01.480322 | orchestrator | 2026-03-05 01:06:01 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:04.547949 | orchestrator | 2026-03-05 01:06:04 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:06:04.548658 | orchestrator | 2026-03-05 01:06:04 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:06:04.552068 | orchestrator | 2026-03-05 01:06:04 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:06:04.553483 | orchestrator | 2026-03-05 01:06:04 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:06:04.556927 | orchestrator | 2026-03-05 01:06:04 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:06:04.556997 | orchestrator | 2026-03-05 01:06:04 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:07.586650 | orchestrator | 2026-03-05 01:06:07 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:06:07.588618 | orchestrator | 2026-03-05 01:06:07 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:06:07.589471 | orchestrator | 2026-03-05 01:06:07 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:06:07.590318 | orchestrator | 2026-03-05 01:06:07 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:06:07.591301 | orchestrator | 2026-03-05 01:06:07 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:06:07.591413 | orchestrator | 2026-03-05 01:06:07 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:10.635108 | orchestrator | 2026-03-05 01:06:10 | INFO  | Task 
baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:06:10.635649 | orchestrator | 2026-03-05 01:06:10 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:06:10.636539 | orchestrator | 2026-03-05 01:06:10 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:06:10.638390 | orchestrator | 2026-03-05 01:06:10 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:06:10.639123 | orchestrator | 2026-03-05 01:06:10 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:06:10.639218 | orchestrator | 2026-03-05 01:06:10 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:13.664753 | orchestrator | 2026-03-05 01:06:13 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:06:13.667831 | orchestrator | 2026-03-05 01:06:13 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:06:13.670850 | orchestrator | 2026-03-05 01:06:13 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:06:13.672964 | orchestrator | 2026-03-05 01:06:13 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:06:13.674780 | orchestrator | 2026-03-05 01:06:13 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:06:13.675192 | orchestrator | 2026-03-05 01:06:13 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:16.707419 | orchestrator | 2026-03-05 01:06:16 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:06:16.708033 | orchestrator | 2026-03-05 01:06:16 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:06:16.709036 | orchestrator | 2026-03-05 01:06:16 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:06:16.710924 | orchestrator | 2026-03-05 01:06:16 | INFO  | Task 
151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:06:16.712141 | orchestrator | 2026-03-05 01:06:16 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:06:16.712192 | orchestrator | 2026-03-05 01:06:16 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:19.747379 | orchestrator | 2026-03-05 01:06:19 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:06:19.748027 | orchestrator | 2026-03-05 01:06:19 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state STARTED 2026-03-05 01:06:19.748823 | orchestrator | 2026-03-05 01:06:19 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:06:19.750274 | orchestrator | 2026-03-05 01:06:19 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:06:19.751468 | orchestrator | 2026-03-05 01:06:19 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:06:19.751511 | orchestrator | 2026-03-05 01:06:19 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:22.790002 | orchestrator | 2026-03-05 01:06:22 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:06:22.790746 | orchestrator | 2026-03-05 01:06:22 | INFO  | Task acaaf09c-1c0b-47f4-8762-57c51ddadb6b is in state SUCCESS 2026-03-05 01:06:22.791207 | orchestrator | 2026-03-05 01:06:22.791237 | orchestrator | 2026-03-05 01:06:22.791244 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-05 01:06:22.791253 | orchestrator | 2026-03-05 01:06:22.791260 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-05 01:06:22.791267 | orchestrator | Thursday 05 March 2026 01:03:46 +0000 (0:00:00.211) 0:00:00.211 ******** 2026-03-05 01:06:22.791274 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-05 01:06:22.791283 | orchestrator | 2026-03-05 01:06:22.791289 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-05 01:06:22.791295 | orchestrator | Thursday 05 March 2026 01:03:47 +0000 (0:00:00.207) 0:00:00.418 ******** 2026-03-05 01:06:22.791302 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-05 01:06:22.791309 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-05 01:06:22.791316 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-05 01:06:22.791323 | orchestrator | 2026-03-05 01:06:22.791329 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-05 01:06:22.791335 | orchestrator | Thursday 05 March 2026 01:03:48 +0000 (0:00:01.137) 0:00:01.556 ******** 2026-03-05 01:06:22.791341 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-05 01:06:22.791347 | orchestrator | 2026-03-05 01:06:22.791352 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-05 01:06:22.791358 | orchestrator | Thursday 05 March 2026 01:03:49 +0000 (0:00:01.258) 0:00:02.815 ******** 2026-03-05 01:06:22.791364 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:22.791371 | orchestrator | 2026-03-05 01:06:22.791377 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-05 01:06:22.791384 | orchestrator | Thursday 05 March 2026 01:03:50 +0000 (0:00:00.901) 0:00:03.717 ******** 2026-03-05 01:06:22.791390 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:22.791396 | orchestrator | 2026-03-05 01:06:22.791402 | orchestrator | TASK [osism.services.cephclient : Manage 
cephclient service] ******************* 2026-03-05 01:06:22.791408 | orchestrator | Thursday 05 March 2026 01:03:51 +0000 (0:00:00.866) 0:00:04.583 ******** 2026-03-05 01:06:22.791415 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-03-05 01:06:22.791422 | orchestrator | ok: [testbed-manager] 2026-03-05 01:06:22.791428 | orchestrator | 2026-03-05 01:06:22.791434 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-05 01:06:22.791440 | orchestrator | Thursday 05 March 2026 01:04:33 +0000 (0:00:42.623) 0:00:47.207 ******** 2026-03-05 01:06:22.791448 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-05 01:06:22.791453 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-05 01:06:22.791457 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-05 01:06:22.791461 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-05 01:06:22.791465 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-05 01:06:22.791469 | orchestrator | 2026-03-05 01:06:22.791473 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-05 01:06:22.791477 | orchestrator | Thursday 05 March 2026 01:04:38 +0000 (0:00:04.269) 0:00:51.476 ******** 2026-03-05 01:06:22.791481 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-05 01:06:22.791485 | orchestrator | 2026-03-05 01:06:22.791489 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-05 01:06:22.791493 | orchestrator | Thursday 05 March 2026 01:04:38 +0000 (0:00:00.484) 0:00:51.961 ******** 2026-03-05 01:06:22.791497 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:06:22.791501 | orchestrator | 2026-03-05 01:06:22.791504 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-05 
01:06:22.791519 | orchestrator | Thursday 05 March 2026 01:04:38 +0000 (0:00:00.141) 0:00:52.102 ******** 2026-03-05 01:06:22.791523 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:06:22.791526 | orchestrator | 2026-03-05 01:06:22.791530 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-03-05 01:06:22.791534 | orchestrator | Thursday 05 March 2026 01:04:39 +0000 (0:00:00.514) 0:00:52.617 ******** 2026-03-05 01:06:22.791538 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:22.791542 | orchestrator | 2026-03-05 01:06:22.791545 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-05 01:06:22.791549 | orchestrator | Thursday 05 March 2026 01:04:40 +0000 (0:00:01.691) 0:00:54.309 ******** 2026-03-05 01:06:22.791553 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:22.791557 | orchestrator | 2026-03-05 01:06:22.791561 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-05 01:06:22.791564 | orchestrator | Thursday 05 March 2026 01:04:41 +0000 (0:00:00.806) 0:00:55.115 ******** 2026-03-05 01:06:22.791568 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:22.791572 | orchestrator | 2026-03-05 01:06:22.791576 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-05 01:06:22.791580 | orchestrator | Thursday 05 March 2026 01:04:42 +0000 (0:00:00.616) 0:00:55.731 ******** 2026-03-05 01:06:22.791595 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-05 01:06:22.791599 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-05 01:06:22.791603 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-05 01:06:22.791607 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-05 01:06:22.791610 | orchestrator | 2026-03-05 01:06:22.791614 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-05 01:06:22.791619 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-05 01:06:22.791624 | orchestrator | 2026-03-05 01:06:22.791628 | orchestrator | 2026-03-05 01:06:22.791641 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:06:22.791645 | orchestrator | Thursday 05 March 2026 01:04:43 +0000 (0:00:01.573) 0:00:57.305 ******** 2026-03-05 01:06:22.791649 | orchestrator | =============================================================================== 2026-03-05 01:06:22.791653 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.62s 2026-03-05 01:06:22.791656 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.27s 2026-03-05 01:06:22.791660 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.69s 2026-03-05 01:06:22.791664 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.57s 2026-03-05 01:06:22.791668 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.26s 2026-03-05 01:06:22.791671 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.14s 2026-03-05 01:06:22.791675 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.90s 2026-03-05 01:06:22.791679 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.87s 2026-03-05 01:06:22.791683 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.81s 2026-03-05 01:06:22.791687 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.62s 2026-03-05 01:06:22.791690 | orchestrator | osism.services.cephclient : Include rook task 
--------------------------- 0.51s 2026-03-05 01:06:22.791694 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.49s 2026-03-05 01:06:22.791698 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2026-03-05 01:06:22.791702 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-03-05 01:06:22.791706 | orchestrator | 2026-03-05 01:06:22.791709 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-05 01:06:22.791713 | orchestrator | 2.16.14 2026-03-05 01:06:22.791721 | orchestrator | 2026-03-05 01:06:22.791725 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-03-05 01:06:22.791728 | orchestrator | 2026-03-05 01:06:22.791732 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-05 01:06:22.791736 | orchestrator | Thursday 05 March 2026 01:04:48 +0000 (0:00:00.311) 0:00:00.311 ******** 2026-03-05 01:06:22.791740 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:22.791743 | orchestrator | 2026-03-05 01:06:22.791747 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-05 01:06:22.791751 | orchestrator | Thursday 05 March 2026 01:04:50 +0000 (0:00:02.010) 0:00:02.321 ******** 2026-03-05 01:06:22.791755 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:22.791759 | orchestrator | 2026-03-05 01:06:22.791762 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-05 01:06:22.791766 | orchestrator | Thursday 05 March 2026 01:04:51 +0000 (0:00:01.156) 0:00:03.478 ******** 2026-03-05 01:06:22.791770 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:22.791774 | orchestrator | 2026-03-05 01:06:22.791777 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] 
******************************** 2026-03-05 01:06:22.791781 | orchestrator | Thursday 05 March 2026 01:04:53 +0000 (0:00:01.152) 0:00:04.631 ******** 2026-03-05 01:06:22.791785 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:22.791789 | orchestrator | 2026-03-05 01:06:22.791792 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-05 01:06:22.791796 | orchestrator | Thursday 05 March 2026 01:04:54 +0000 (0:00:01.355) 0:00:05.986 ******** 2026-03-05 01:06:22.791800 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:22.791804 | orchestrator | 2026-03-05 01:06:22.791808 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-05 01:06:22.791811 | orchestrator | Thursday 05 March 2026 01:04:55 +0000 (0:00:01.057) 0:00:07.044 ******** 2026-03-05 01:06:22.791815 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:22.791819 | orchestrator | 2026-03-05 01:06:22.791823 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-05 01:06:22.791826 | orchestrator | Thursday 05 March 2026 01:04:56 +0000 (0:00:01.126) 0:00:08.170 ******** 2026-03-05 01:06:22.791830 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:22.791834 | orchestrator | 2026-03-05 01:06:22.791838 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-05 01:06:22.791842 | orchestrator | Thursday 05 March 2026 01:04:58 +0000 (0:00:02.098) 0:00:10.268 ******** 2026-03-05 01:06:22.791845 | orchestrator | changed: [testbed-manager] 2026-03-05 01:06:22.791849 | orchestrator | 2026-03-05 01:06:22.791853 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-05 01:06:22.791857 | orchestrator | Thursday 05 March 2026 01:05:00 +0000 (0:00:01.296) 0:00:11.565 ******** 2026-03-05 01:06:22.791860 | orchestrator | changed: 
[testbed-manager] 2026-03-05 01:06:22.791864 | orchestrator | 2026-03-05 01:06:22.791868 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-05 01:06:22.791872 | orchestrator | Thursday 05 March 2026 01:05:56 +0000 (0:00:56.607) 0:01:08.172 ******** 2026-03-05 01:06:22.791876 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:06:22.791880 | orchestrator | 2026-03-05 01:06:22.791883 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-05 01:06:22.791887 | orchestrator | 2026-03-05 01:06:22.791894 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-05 01:06:22.791898 | orchestrator | Thursday 05 March 2026 01:05:56 +0000 (0:00:00.178) 0:01:08.351 ******** 2026-03-05 01:06:22.791902 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:22.791906 | orchestrator | 2026-03-05 01:06:22.791909 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-05 01:06:22.791913 | orchestrator | 2026-03-05 01:06:22.791917 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-05 01:06:22.791921 | orchestrator | Thursday 05 March 2026 01:06:08 +0000 (0:00:11.849) 0:01:20.200 ******** 2026-03-05 01:06:22.791928 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:22.791932 | orchestrator | 2026-03-05 01:06:22.791938 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-05 01:06:22.791942 | orchestrator | 2026-03-05 01:06:22.791946 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-05 01:06:22.791950 | orchestrator | Thursday 05 March 2026 01:06:20 +0000 (0:00:11.386) 0:01:31.587 ******** 2026-03-05 01:06:22.791954 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:22.791958 | orchestrator | 
2026-03-05 01:06:22.791961 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:06:22.791965 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-05 01:06:22.791969 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:06:22.791973 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:06:22.791977 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:06:22.791981 | orchestrator | 2026-03-05 01:06:22.791985 | orchestrator | 2026-03-05 01:06:22.791989 | orchestrator | 2026-03-05 01:06:22.791993 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:06:22.791996 | orchestrator | Thursday 05 March 2026 01:06:21 +0000 (0:00:01.404) 0:01:32.992 ******** 2026-03-05 01:06:22.792000 | orchestrator | =============================================================================== 2026-03-05 01:06:22.792004 | orchestrator | Create admin user ------------------------------------------------------ 56.61s 2026-03-05 01:06:22.792008 | orchestrator | Restart ceph manager service ------------------------------------------- 24.64s 2026-03-05 01:06:22.792012 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.10s 2026-03-05 01:06:22.792015 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.01s 2026-03-05 01:06:22.792019 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.36s 2026-03-05 01:06:22.792023 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.30s 2026-03-05 01:06:22.792027 | orchestrator | Set mgr/dashboard/ssl to false 
------------------------------------------ 1.16s 2026-03-05 01:06:22.792031 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.15s 2026-03-05 01:06:22.792034 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.13s 2026-03-05 01:06:22.792038 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.06s 2026-03-05 01:06:22.792042 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s 2026-03-05 01:06:22.794074 | orchestrator | 2026-03-05 01:06:22 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:06:22.795905 | orchestrator | 2026-03-05 01:06:22 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:06:22.797005 | orchestrator | 2026-03-05 01:06:22 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:06:22.797253 | orchestrator | 2026-03-05 01:06:22 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:25.830529 | orchestrator | 2026-03-05 01:06:25 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:06:25.832587 | orchestrator | 2026-03-05 01:06:25 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:06:25.833462 | orchestrator | 2026-03-05 01:06:25 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:06:25.834561 | orchestrator | 2026-03-05 01:06:25 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:06:25.834689 | orchestrator | 2026-03-05 01:06:25 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:28.878248 | orchestrator | 2026-03-05 01:06:28 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:06:28.879608 | orchestrator | 2026-03-05 01:06:28 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 
2026-03-05 01:06:28.881592 | orchestrator | 2026-03-05 01:06:28 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:06:28.882540 | orchestrator | 2026-03-05 01:06:28 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:06:28.882613 | orchestrator | 2026-03-05 01:06:28 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:31.918348 | orchestrator | 2026-03-05 01:06:31 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state STARTED 2026-03-05 01:06:31.920358 | orchestrator | 2026-03-05 01:06:31 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:06:31.920419 | orchestrator | 2026-03-05 01:06:31 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state STARTED 2026-03-05 01:06:31.921383 | orchestrator | 2026-03-05 01:06:31 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:06:31.921444 | orchestrator | 2026-03-05 01:06:31 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:06:34.980444 | orchestrator | 2026-03-05 01:06:34 | INFO  | Task baeeb37d-bb3b-4085-b679-cd367e610bb7 is in state SUCCESS 2026-03-05 01:06:34.981413 | orchestrator | 2026-03-05 01:06:34.981449 | orchestrator | 2026-03-05 01:06:34.981454 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:06:34.981459 | orchestrator | 2026-03-05 01:06:34.981464 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:06:34.981469 | orchestrator | Thursday 05 March 2026 01:04:22 +0000 (0:00:00.258) 0:00:00.258 ******** 2026-03-05 01:06:34.981474 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:06:34.981479 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:06:34.981483 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:06:34.981487 | orchestrator | 2026-03-05 01:06:34.981491 | orchestrator | TASK [Group hosts based on enabled 
services] *********************************** 2026-03-05 01:06:34.981496 | orchestrator | Thursday 05 March 2026 01:04:22 +0000 (0:00:00.400) 0:00:00.658 ******** 2026-03-05 01:06:34.981500 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-05 01:06:34.981505 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-05 01:06:34.981509 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-05 01:06:34.981513 | orchestrator | 2026-03-05 01:06:34.981517 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-05 01:06:34.981521 | orchestrator | 2026-03-05 01:06:34.981524 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-05 01:06:34.981528 | orchestrator | Thursday 05 March 2026 01:04:23 +0000 (0:00:00.631) 0:00:01.290 ******** 2026-03-05 01:06:34.981532 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:06:34.981537 | orchestrator | 2026-03-05 01:06:34.981541 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-05 01:06:34.981545 | orchestrator | Thursday 05 March 2026 01:04:23 +0000 (0:00:00.554) 0:00:01.844 ******** 2026-03-05 01:06:34.981549 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-05 01:06:34.981553 | orchestrator | 2026-03-05 01:06:34.981557 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-05 01:06:34.981561 | orchestrator | Thursday 05 March 2026 01:04:28 +0000 (0:00:04.364) 0:00:06.209 ******** 2026-03-05 01:06:34.981582 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-05 01:06:34.981587 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> 
public) 2026-03-05 01:06:34.981591 | orchestrator | 2026-03-05 01:06:34.981595 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-05 01:06:34.981598 | orchestrator | Thursday 05 March 2026 01:04:35 +0000 (0:00:07.446) 0:00:13.655 ******** 2026-03-05 01:06:34.981602 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-05 01:06:34.981606 | orchestrator | 2026-03-05 01:06:34.981610 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-05 01:06:34.981614 | orchestrator | Thursday 05 March 2026 01:04:39 +0000 (0:00:03.802) 0:00:17.457 ******** 2026-03-05 01:06:34.981618 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-05 01:06:34.981622 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-05 01:06:34.981626 | orchestrator | 2026-03-05 01:06:34.981630 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-05 01:06:34.981634 | orchestrator | Thursday 05 March 2026 01:04:43 +0000 (0:00:04.369) 0:00:21.827 ******** 2026-03-05 01:06:34.981638 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-05 01:06:34.981642 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-05 01:06:34.981646 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-05 01:06:34.981650 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-05 01:06:34.981654 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-05 01:06:34.981658 | orchestrator | 2026-03-05 01:06:34.981662 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-05 01:06:34.981665 | orchestrator | Thursday 05 March 2026 01:05:01 +0000 (0:00:17.539) 0:00:39.367 ******** 2026-03-05 01:06:34.981669 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 
2026-03-05 01:06:34.981673 | orchestrator | 2026-03-05 01:06:34.981677 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-05 01:06:34.981681 | orchestrator | Thursday 05 March 2026 01:05:05 +0000 (0:00:03.969) 0:00:43.336 ******** 2026-03-05 01:06:34.981698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.981712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.981721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.981726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.981733 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.981740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.981748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.981754 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.981761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.981765 | orchestrator | 2026-03-05 01:06:34.981769 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-05 01:06:34.981773 | orchestrator | Thursday 05 March 2026 01:05:07 +0000 (0:00:02.005) 0:00:45.342 ******** 2026-03-05 01:06:34.981777 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-05 01:06:34.981781 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-05 01:06:34.981785 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-05 01:06:34.981789 | orchestrator | 2026-03-05 01:06:34.981792 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-05 
01:06:34.981796 | orchestrator | Thursday 05 March 2026 01:05:09 +0000 (0:00:02.093) 0:00:47.436 ******** 2026-03-05 01:06:34.981800 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:34.981804 | orchestrator | 2026-03-05 01:06:34.981808 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-05 01:06:34.981812 | orchestrator | Thursday 05 March 2026 01:05:09 +0000 (0:00:00.136) 0:00:47.572 ******** 2026-03-05 01:06:34.981816 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:34.981819 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:34.981823 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:34.981827 | orchestrator | 2026-03-05 01:06:34.981831 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-05 01:06:34.981835 | orchestrator | Thursday 05 March 2026 01:05:10 +0000 (0:00:00.627) 0:00:48.200 ******** 2026-03-05 01:06:34.981838 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:06:34.981842 | orchestrator | 2026-03-05 01:06:34.981846 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-05 01:06:34.981870 | orchestrator | Thursday 05 March 2026 01:05:11 +0000 (0:00:01.268) 0:00:49.469 ******** 2026-03-05 01:06:34.981921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.981929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.981938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.981942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.981947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.981987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.981995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982046 | orchestrator | 2026-03-05 01:06:34.982051 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-05 01:06:34.982055 | orchestrator | Thursday 05 March 2026 01:05:15 +0000 (0:00:04.106) 0:00:53.575 ******** 2026-03-05 01:06:34.982059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:06:34.982063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:06:34.982118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982127 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:34.982159 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:34.982166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:06:34.982177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982197 | orchestrator | skipping: [testbed-node-2] 2026-03-05 
01:06:34.982204 | orchestrator | 2026-03-05 01:06:34.982215 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-05 01:06:34.982221 | orchestrator | Thursday 05 March 2026 01:05:17 +0000 (0:00:02.573) 0:00:56.148 ******** 2026-03-05 01:06:34.982228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:06:34.982234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982240 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982246 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:34.982253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:06:34.982275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982292 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:34.982299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:06:34.982305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982321 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:34.982327 | orchestrator | 2026-03-05 01:06:34.982333 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-05 01:06:34.982339 | orchestrator | Thursday 05 March 2026 01:05:20 +0000 (0:00:02.072) 0:00:58.220 ******** 2026-03-05 01:06:34.982348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.982380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.982387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.982393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982443 | orchestrator | 2026-03-05 01:06:34.982449 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-05 01:06:34.982456 | orchestrator | Thursday 05 March 2026 01:05:24 +0000 (0:00:04.102) 0:01:02.323 ******** 2026-03-05 01:06:34.982466 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:34.982472 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:34.982478 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:34.982484 | orchestrator | 2026-03-05 01:06:34.982490 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-05 01:06:34.982496 | orchestrator | Thursday 05 March 2026 01:05:28 +0000 (0:00:04.074) 0:01:06.397 ******** 2026-03-05 01:06:34.982503 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 01:06:34.982509 | orchestrator | 2026-03-05 01:06:34.982514 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-05 01:06:34.982520 | orchestrator | Thursday 05 March 2026 01:05:29 +0000 (0:00:01.208) 0:01:07.606 ******** 2026-03-05 01:06:34.982531 | orchestrator | skipping: 
[testbed-node-0] 2026-03-05 01:06:34.982538 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:34.982544 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:34.982550 | orchestrator | 2026-03-05 01:06:34.982596 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-05 01:06:34.982600 | orchestrator | Thursday 05 March 2026 01:05:30 +0000 (0:00:00.929) 0:01:08.535 ******** 2026-03-05 01:06:34.982605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.982617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.982622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.982627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982662 | orchestrator | 2026-03-05 01:06:34.982665 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-05 01:06:34.982669 | orchestrator | Thursday 05 March 2026 01:05:41 +0000 (0:00:10.932) 0:01:19.467 ******** 2026-03-05 01:06:34.982673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:06:34.982681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982689 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:34.982700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:06:34.982704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982715 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:34.982720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-05 01:06:34.982724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:06:34.982735 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:34.982739 | orchestrator | 2026-03-05 01:06:34.982743 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-05 01:06:34.982747 | orchestrator | Thursday 05 March 2026 01:05:42 +0000 (0:00:01.025) 0:01:20.493 ******** 2026-03-05 01:06:34.982754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.982759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.982766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-05 01:06:34.982770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:06:34.982806 | orchestrator | 2026-03-05 01:06:34.982810 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-05 01:06:34.982814 | orchestrator | Thursday 05 March 2026 01:05:46 +0000 (0:00:04.457) 0:01:24.951 ******** 2026-03-05 01:06:34.982818 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:06:34.982822 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:06:34.982826 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:06:34.982829 | orchestrator | 2026-03-05 01:06:34.982833 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-05 01:06:34.982837 | orchestrator | Thursday 05 March 2026 01:05:47 +0000 (0:00:00.881) 0:01:25.832 ******** 2026-03-05 01:06:34.982841 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:34.982845 | orchestrator | 2026-03-05 01:06:34.982848 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-05 01:06:34.982852 | orchestrator | Thursday 05 March 2026 01:05:49 +0000 (0:00:02.261) 0:01:28.094 ******** 2026-03-05 01:06:34.982856 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:34.982860 | orchestrator | 2026-03-05 01:06:34.982863 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-05 01:06:34.982867 | orchestrator | Thursday 05 March 2026 01:05:52 +0000 (0:00:02.409) 0:01:30.503 ******** 2026-03-05 01:06:34.982871 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:34.982875 | orchestrator | 2026-03-05 01:06:34.982879 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-05 01:06:34.982882 | orchestrator | Thursday 05 March 2026 01:06:04 +0000 (0:00:12.435) 
0:01:42.939 ******** 2026-03-05 01:06:34.982886 | orchestrator | 2026-03-05 01:06:34.982893 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-05 01:06:34.982897 | orchestrator | Thursday 05 March 2026 01:06:04 +0000 (0:00:00.155) 0:01:43.094 ******** 2026-03-05 01:06:34.982901 | orchestrator | 2026-03-05 01:06:34.982905 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-05 01:06:34.982909 | orchestrator | Thursday 05 March 2026 01:06:05 +0000 (0:00:00.154) 0:01:43.248 ******** 2026-03-05 01:06:34.982913 | orchestrator | 2026-03-05 01:06:34.982916 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-05 01:06:34.982920 | orchestrator | Thursday 05 March 2026 01:06:05 +0000 (0:00:00.087) 0:01:43.336 ******** 2026-03-05 01:06:34.982924 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:34.982928 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:34.982932 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:34.982935 | orchestrator | 2026-03-05 01:06:34.982943 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-05 01:06:34.982946 | orchestrator | Thursday 05 March 2026 01:06:18 +0000 (0:00:13.385) 0:01:56.722 ******** 2026-03-05 01:06:34.982950 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:34.982954 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:06:34.982961 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:34.982964 | orchestrator | 2026-03-05 01:06:34.982969 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-05 01:06:34.982972 | orchestrator | Thursday 05 March 2026 01:06:25 +0000 (0:00:06.514) 0:02:03.242 ******** 2026-03-05 01:06:34.982976 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:06:34.982980 | orchestrator | changed: 
[testbed-node-1] 2026-03-05 01:06:34.982984 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:06:34.982988 | orchestrator | 2026-03-05 01:06:34.982991 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:06:34.982996 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:06:34.983001 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-05 01:06:34.983005 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-05 01:06:34.983008 | orchestrator | 2026-03-05 01:06:34.983012 | orchestrator | 2026-03-05 01:06:34.983016 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:06:34.983020 | orchestrator | Thursday 05 March 2026 01:06:31 +0000 (0:00:06.784) 0:02:10.027 ******** 2026-03-05 01:06:34.983024 | orchestrator | =============================================================================== 2026-03-05 01:06:34.983028 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.54s 2026-03-05 01:06:34.983032 | orchestrator | barbican : Restart barbican-api container ------------------------------ 13.39s 2026-03-05 01:06:34.983035 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.44s 2026-03-05 01:06:34.983039 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.93s 2026-03-05 01:06:34.983043 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.45s 2026-03-05 01:06:34.983047 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.78s 2026-03-05 01:06:34.983051 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.52s 2026-03-05 
01:06:34.983057 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.46s 2026-03-05 01:06:34.983062 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.37s 2026-03-05 01:06:34.983068 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.36s 2026-03-05 01:06:34.983123 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.11s 2026-03-05 01:06:34.983131 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.10s 2026-03-05 01:06:34.983141 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 4.07s 2026-03-05 01:06:34.983147 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.97s 2026-03-05 01:06:34.983153 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.80s 2026-03-05 01:06:34.983159 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.57s 2026-03-05 01:06:34.983165 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.41s 2026-03-05 01:06:34.983170 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.26s 2026-03-05 01:06:34.983176 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.09s 2026-03-05 01:06:34.983182 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.07s 2026-03-05 01:06:34.983275 | orchestrator | 2026-03-05 01:06:34 | INFO  | Task b4d7d40c-47ce-411a-9ba8-cb5778b8ffea is in state STARTED 2026-03-05 01:06:34.983448 | orchestrator | 2026-03-05 01:06:34 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:06:34.984621 | orchestrator | 2026-03-05 01:06:34 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is 
in state STARTED 2026-03-05 01:06:34.985226 | orchestrator | 2026-03-05 01:06:34 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:06:34.985248 | orchestrator | 2026-03-05 01:06:34 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:42.007413 | orchestrator | 2026-03-05 01:07:42 | INFO  | Task b4d7d40c-47ce-411a-9ba8-cb5778b8ffea is in state STARTED 2026-03-05 01:07:42.009962 | orchestrator | 2026-03-05 01:07:42 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:07:42.014395 | orchestrator | 2026-03-05 01:07:42 | INFO  | Task 151530e3-baec-4ce2-942f-dc8621856335 is in state SUCCESS 2026-03-05 01:07:42.016838 | orchestrator | 2026-03-05 01:07:42.016906 | orchestrator | 2026-03-05 01:07:42.016916 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:07:42.016923 | orchestrator | 2026-03-05 01:07:42.016930 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:07:42.016936 | orchestrator | Thursday 05 March 2026 01:04:22 +0000 (0:00:00.259) 0:00:00.259 ******** 2026-03-05 01:07:42.016943 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:07:42.016951 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:07:42.016957 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:07:42.016964 | orchestrator | 2026-03-05 01:07:42.016971 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:07:42.016978 | orchestrator | Thursday 05 March 2026 01:04:22 +0000 (0:00:00.337) 0:00:00.596 ******** 2026-03-05 01:07:42.016985 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-05 01:07:42.016992 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-05 01:07:42.016999 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-05 01:07:42.017005 | orchestrator | 2026-03-05 01:07:42.017012 | orchestrator | PLAY [Apply role designate]
**************************************************** 2026-03-05 01:07:42.017019 | orchestrator | 2026-03-05 01:07:42.017026 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-05 01:07:42.017032 | orchestrator | Thursday 05 March 2026 01:04:22 +0000 (0:00:00.509) 0:00:01.106 ******** 2026-03-05 01:07:42.017061 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:07:42.017069 | orchestrator | 2026-03-05 01:07:42.017075 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-05 01:07:42.017082 | orchestrator | Thursday 05 March 2026 01:04:23 +0000 (0:00:00.622) 0:00:01.729 ******** 2026-03-05 01:07:42.017088 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-05 01:07:42.017095 | orchestrator | 2026-03-05 01:07:42.017102 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-05 01:07:42.017108 | orchestrator | Thursday 05 March 2026 01:04:28 +0000 (0:00:04.526) 0:00:06.255 ******** 2026-03-05 01:07:42.017115 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-05 01:07:42.017122 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-05 01:07:42.017128 | orchestrator | 2026-03-05 01:07:42.017135 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-05 01:07:42.017142 | orchestrator | Thursday 05 March 2026 01:04:36 +0000 (0:00:08.060) 0:00:14.315 ******** 2026-03-05 01:07:42.017149 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-05 01:07:42.017156 | orchestrator | 2026-03-05 01:07:42.017163 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-05 
01:07:42.017170 | orchestrator | Thursday 05 March 2026 01:04:40 +0000 (0:00:03.931) 0:00:18.247 ******** 2026-03-05 01:07:42.017176 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-05 01:07:42.017183 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-05 01:07:42.017190 | orchestrator | 2026-03-05 01:07:42.017197 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-05 01:07:42.017204 | orchestrator | Thursday 05 March 2026 01:04:44 +0000 (0:00:04.468) 0:00:22.716 ******** 2026-03-05 01:07:42.017210 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-05 01:07:42.017217 | orchestrator | 2026-03-05 01:07:42.017223 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-05 01:07:42.017229 | orchestrator | Thursday 05 March 2026 01:04:48 +0000 (0:00:03.977) 0:00:26.693 ******** 2026-03-05 01:07:42.017236 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-05 01:07:42.017242 | orchestrator | 2026-03-05 01:07:42.017248 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-05 01:07:42.017263 | orchestrator | Thursday 05 March 2026 01:04:53 +0000 (0:00:04.574) 0:00:31.268 ******** 2026-03-05 01:07:42.017302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:07:42.017381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:07:42.017394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:07:42.017402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
2026-03-05 01:07:42.017717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017742 | orchestrator | 2026-03-05 01:07:42.017749 | 
orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-05 01:07:42.017757 | orchestrator | Thursday 05 March 2026 01:04:56 +0000 (0:00:03.473) 0:00:34.742 ******** 2026-03-05 01:07:42.017763 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:07:42.017771 | orchestrator | 2026-03-05 01:07:42.017777 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-05 01:07:42.017802 | orchestrator | Thursday 05 March 2026 01:04:56 +0000 (0:00:00.146) 0:00:34.888 ******** 2026-03-05 01:07:42.017809 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:07:42.017816 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:07:42.017823 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:07:42.017829 | orchestrator | 2026-03-05 01:07:42.017836 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-05 01:07:42.017843 | orchestrator | Thursday 05 March 2026 01:04:57 +0000 (0:00:00.352) 0:00:35.240 ******** 2026-03-05 01:07:42.017850 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:07:42.017857 | orchestrator | 2026-03-05 01:07:42.017863 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-05 01:07:42.017869 | orchestrator | Thursday 05 March 2026 01:04:57 +0000 (0:00:00.757) 0:00:35.998 ******** 2026-03-05 01:07:42.017886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:07:42.017894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:07:42.017901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:07:42.017926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.017993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.018088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.018100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.018107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.018118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.018131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.018139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018170 | orchestrator |
2026-03-05 01:07:42.018208 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-03-05 01:07:42.018217 | orchestrator | Thursday 05 March 2026 01:05:04 +0000 (0:00:06.410) 0:00:42.408 ********
2026-03-05 01:07:42.018228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:07:42.018240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:07:42.018247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018322 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:07:42.018335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:07:42.018346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:07:42.018376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018407 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:07:42.018414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:07:42.018434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:07:42.018442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018473 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:07:42.018497 | orchestrator |
2026-03-05 01:07:42.018505 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-03-05 01:07:42.018513 | orchestrator | Thursday 05 March 2026 01:05:05 +0000 (0:00:00.799) 0:00:43.207 ********
2026-03-05 01:07:42.018524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:07:42.018535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:07:42.018547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018575 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:07:42.018582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:07:42.018596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:07:42.018607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018635 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:07:42.018642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:07:42.018659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:07:42.018669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018697 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:07:42.018703 | orchestrator |
2026-03-05 01:07:42.018710 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-03-05 01:07:42.018716 | orchestrator | Thursday 05 March 2026 01:05:07 +0000 (0:00:02.291) 0:00:45.499 ********
2026-03-05 01:07:42.018726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:07:42.018741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:07:42.018749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-05 01:07:42.018756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:07:42.018763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:07:42.018768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-05 01:07:42.018785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:07:42.018873 | orchestrator |
2026-03-05 01:07:42.018888 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-03-05 01:07:42.018895 | orchestrator | Thursday 05 March 2026 01:05:15 +0000 (0:00:07.884) 0:00:53.384 ********
2026-03-05 01:07:42.018901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:07:42.019500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:07:42.019526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:07:42.019532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019654 | orchestrator | 2026-03-05 01:07:42.019661 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-05 01:07:42.019671 | orchestrator | Thursday 05 March 2026 01:05:41 +0000 (0:00:26.487) 0:01:19.871 ******** 2026-03-05 01:07:42.019678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-05 01:07:42.019685 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-05 01:07:42.019691 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-05 01:07:42.019697 | orchestrator | 2026-03-05 01:07:42.019703 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-05 01:07:42.019710 | orchestrator | Thursday 05 March 2026 01:05:49 +0000 (0:00:08.279) 0:01:28.151 ******** 2026-03-05 01:07:42.019716 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-05 01:07:42.019722 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-05 01:07:42.019728 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-05 01:07:42.019734 | orchestrator | 2026-03-05 01:07:42.019740 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-05 01:07:42.019747 | orchestrator | Thursday 05 March 2026 01:05:54 +0000 (0:00:04.193) 0:01:32.344 ******** 
2026-03-05 01:07:42.019763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:07:42.019770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:07:42.019777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:07:42.019788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.019804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.019814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.019821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.019834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.019852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.019864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.019871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-03-05 01:07:42.019878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.019884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019901 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019907 | orchestrator | 2026-03-05 01:07:42.019913 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-05 01:07:42.019919 | orchestrator | Thursday 05 March 2026 01:05:57 +0000 (0:00:03.780) 0:01:36.124 ******** 2026-03-05 01:07:42.019931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:07:42.019938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:07:42.019945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:07:42.019954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.019967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.019980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.019987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.019994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020106 | orchestrator | 2026-03-05 01:07:42.020112 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-05 01:07:42.020119 | orchestrator | Thursday 05 March 2026 01:06:01 +0000 (0:00:03.696) 0:01:39.821 ******** 
2026-03-05 01:07:42.020127 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:07:42.020133 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:07:42.020140 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:07:42.020146 | orchestrator | 2026-03-05 01:07:42.020152 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-05 01:07:42.020159 | orchestrator | Thursday 05 March 2026 01:06:02 +0000 (0:00:00.620) 0:01:40.441 ******** 2026-03-05 01:07:42.020173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:07:42.020180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 01:07:42.020190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020217 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:07:42.020229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 01:07:42.020236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 01:07:42.020248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020275 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:07:42.020288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-05 
01:07:42.020295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-05 01:07:42.020305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020319 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:07:42.020333 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:07:42.020339 | orchestrator | 2026-03-05 01:07:42.020345 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-05 01:07:42.020352 | orchestrator | Thursday 05 March 2026 01:06:04 +0000 (0:00:02.066) 0:01:42.507 ******** 2026-03-05 01:07:42.020365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:07:42.020376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:07:42.020382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-05 01:07:42.020388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:07:42.020495 | orchestrator | 2026-03-05 01:07:42.020502 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-05 01:07:42.020508 | orchestrator | Thursday 05 March 2026 01:06:09 +0000 (0:00:05.418) 0:01:47.926 ******** 2026-03-05 01:07:42.020515 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:07:42.020522 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:07:42.020528 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:07:42.020534 | orchestrator | 2026-03-05 01:07:42.020545 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-05 01:07:42.020551 | orchestrator | Thursday 05 March 2026 01:06:10 +0000 (0:00:00.285) 0:01:48.212 ******** 2026-03-05 01:07:42.020558 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-05 01:07:42.020564 | orchestrator | 2026-03-05 01:07:42.020571 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-05 01:07:42.020577 | orchestrator | Thursday 05 March 2026 01:06:12 +0000 (0:00:02.428) 0:01:50.640 ******** 2026-03-05 01:07:42.020583 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-05 01:07:42.020592 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-05 01:07:42.020599 | orchestrator | 2026-03-05 01:07:42.020605 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-05 01:07:42.020615 | orchestrator | Thursday 05 March 2026 01:06:15 +0000 (0:00:02.641) 0:01:53.282 ******** 2026-03-05 01:07:42.020621 | orchestrator | changed: [testbed-node-0] 
2026-03-05 01:07:42.020627 | orchestrator | 2026-03-05 01:07:42.020633 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-05 01:07:42.020640 | orchestrator | Thursday 05 March 2026 01:06:32 +0000 (0:00:17.396) 0:02:10.678 ******** 2026-03-05 01:07:42.020646 | orchestrator | 2026-03-05 01:07:42.020653 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-05 01:07:42.020659 | orchestrator | Thursday 05 March 2026 01:06:32 +0000 (0:00:00.063) 0:02:10.742 ******** 2026-03-05 01:07:42.020665 | orchestrator | 2026-03-05 01:07:42.020671 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-05 01:07:42.020678 | orchestrator | Thursday 05 March 2026 01:06:32 +0000 (0:00:00.073) 0:02:10.816 ******** 2026-03-05 01:07:42.020683 | orchestrator | 2026-03-05 01:07:42.020689 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-05 01:07:42.020696 | orchestrator | Thursday 05 March 2026 01:06:32 +0000 (0:00:00.112) 0:02:10.930 ******** 2026-03-05 01:07:42.020702 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:07:42.020708 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:07:42.020715 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:07:42.020721 | orchestrator | 2026-03-05 01:07:42.020728 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-05 01:07:42.020734 | orchestrator | Thursday 05 March 2026 01:06:46 +0000 (0:00:13.370) 0:02:24.302 ******** 2026-03-05 01:07:42.020741 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:07:42.020747 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:07:42.020753 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:07:42.020759 | orchestrator | 2026-03-05 01:07:42.020766 | orchestrator | RUNNING HANDLER [designate : Restart designate-central 
container] ************** 2026-03-05 01:07:42.020772 | orchestrator | Thursday 05 March 2026 01:06:56 +0000 (0:00:10.854) 0:02:35.156 ******** 2026-03-05 01:07:42.020778 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:07:42.020784 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:07:42.020790 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:07:42.020796 | orchestrator | 2026-03-05 01:07:42.020802 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-05 01:07:42.020808 | orchestrator | Thursday 05 March 2026 01:07:09 +0000 (0:00:12.722) 0:02:47.879 ******** 2026-03-05 01:07:42.020815 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:07:42.020821 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:07:42.020828 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:07:42.020834 | orchestrator | 2026-03-05 01:07:42.020840 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-05 01:07:42.020847 | orchestrator | Thursday 05 March 2026 01:07:15 +0000 (0:00:06.080) 0:02:53.960 ******** 2026-03-05 01:07:42.020853 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:07:42.020859 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:07:42.020866 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:07:42.020872 | orchestrator | 2026-03-05 01:07:42.020883 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-05 01:07:42.020890 | orchestrator | Thursday 05 March 2026 01:07:26 +0000 (0:00:10.520) 0:03:04.480 ******** 2026-03-05 01:07:42.020896 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:07:42.020902 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:07:42.020908 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:07:42.020915 | orchestrator | 2026-03-05 01:07:42.020921 | orchestrator | TASK [designate : Non-destructive DNS pools update] 
**************************** 2026-03-05 01:07:42.020927 | orchestrator | Thursday 05 March 2026 01:07:32 +0000 (0:00:05.739) 0:03:10.219 ******** 2026-03-05 01:07:42.020934 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:07:42.020940 | orchestrator | 2026-03-05 01:07:42.020946 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:07:42.020953 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:07:42.020960 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-05 01:07:42.020967 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-05 01:07:42.020973 | orchestrator | 2026-03-05 01:07:42.020979 | orchestrator | 2026-03-05 01:07:42.020985 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:07:42.020991 | orchestrator | Thursday 05 March 2026 01:07:41 +0000 (0:00:09.135) 0:03:19.355 ******** 2026-03-05 01:07:42.020997 | orchestrator | =============================================================================== 2026-03-05 01:07:42.021002 | orchestrator | designate : Copying over designate.conf -------------------------------- 26.49s 2026-03-05 01:07:42.021008 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.40s 2026-03-05 01:07:42.021014 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.37s 2026-03-05 01:07:42.021020 | orchestrator | designate : Restart designate-central container ------------------------ 12.72s 2026-03-05 01:07:42.021026 | orchestrator | designate : Restart designate-api container ---------------------------- 10.85s 2026-03-05 01:07:42.021032 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.52s 2026-03-05 
01:07:42.021050 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 9.14s 2026-03-05 01:07:42.021057 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.28s 2026-03-05 01:07:42.021064 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 8.06s 2026-03-05 01:07:42.021075 | orchestrator | designate : Copying over config.json files for services ----------------- 7.88s 2026-03-05 01:07:42.021127 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.41s 2026-03-05 01:07:42.021136 | orchestrator | designate : Restart designate-producer container ------------------------ 6.08s 2026-03-05 01:07:42.021143 | orchestrator | designate : Restart designate-worker container -------------------------- 5.74s 2026-03-05 01:07:42.021149 | orchestrator | designate : Check designate containers ---------------------------------- 5.42s 2026-03-05 01:07:42.021155 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.57s 2026-03-05 01:07:42.021161 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.53s 2026-03-05 01:07:42.021168 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.47s 2026-03-05 01:07:42.021174 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.19s 2026-03-05 01:07:42.021181 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.98s 2026-03-05 01:07:42.021189 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.93s 2026-03-05 01:07:42.021196 | orchestrator | 2026-03-05 01:07:42 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:07:42.021208 | orchestrator | 2026-03-05 01:07:42 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:45.072465 
| orchestrator | 2026-03-05 01:07:45 | INFO  | Task b4d7d40c-47ce-411a-9ba8-cb5778b8ffea is in state STARTED 2026-03-05 01:07:45.074773 | orchestrator | 2026-03-05 01:07:45 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:07:45.076217 | orchestrator | 2026-03-05 01:07:45 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:07:45.078381 | orchestrator | 2026-03-05 01:07:45 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:07:45.078436 | orchestrator | 2026-03-05 01:07:45 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:48.123464 | orchestrator | 2026-03-05 01:07:48 | INFO  | Task b4d7d40c-47ce-411a-9ba8-cb5778b8ffea is in state STARTED 2026-03-05 01:07:48.123735 | orchestrator | 2026-03-05 01:07:48 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:07:48.125447 | orchestrator | 2026-03-05 01:07:48 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:07:48.125844 | orchestrator | 2026-03-05 01:07:48 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:07:48.126164 | orchestrator | 2026-03-05 01:07:48 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:51.179742 | orchestrator | 2026-03-05 01:07:51 | INFO  | Task b4d7d40c-47ce-411a-9ba8-cb5778b8ffea is in state SUCCESS 2026-03-05 01:07:51.180667 | orchestrator | 2026-03-05 01:07:51.180697 | orchestrator | 2026-03-05 01:07:51.180703 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:07:51.180709 | orchestrator | 2026-03-05 01:07:51.180713 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:07:51.180719 | orchestrator | Thursday 05 March 2026 01:06:40 +0000 (0:00:00.579) 0:00:00.579 ******** 2026-03-05 01:07:51.180723 | orchestrator | ok: [testbed-node-0] 2026-03-05 
01:07:51.180728 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:07:51.180732 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:07:51.180737 | orchestrator | 2026-03-05 01:07:51.180741 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:07:51.180745 | orchestrator | Thursday 05 March 2026 01:06:40 +0000 (0:00:00.359) 0:00:00.939 ******** 2026-03-05 01:07:51.180750 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-05 01:07:51.180754 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-05 01:07:51.180758 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-05 01:07:51.180764 | orchestrator | 2026-03-05 01:07:51.180770 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-05 01:07:51.180777 | orchestrator | 2026-03-05 01:07:51.180785 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-05 01:07:51.180793 | orchestrator | Thursday 05 March 2026 01:06:41 +0000 (0:00:00.989) 0:00:01.928 ******** 2026-03-05 01:07:51.180800 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:07:51.180807 | orchestrator | 2026-03-05 01:07:51.180813 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-05 01:07:51.180820 | orchestrator | Thursday 05 March 2026 01:06:42 +0000 (0:00:00.814) 0:00:02.743 ******** 2026-03-05 01:07:51.180826 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-05 01:07:51.180833 | orchestrator | 2026-03-05 01:07:51.180839 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-05 01:07:51.180845 | orchestrator | Thursday 05 March 2026 01:06:46 +0000 (0:00:04.138) 0:00:06.882 ******** 2026-03-05 01:07:51.180852 
| orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-05 01:07:51.180882 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-05 01:07:51.180889 | orchestrator | 2026-03-05 01:07:51.180909 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-05 01:07:51.180915 | orchestrator | Thursday 05 March 2026 01:06:54 +0000 (0:00:07.350) 0:00:14.233 ******** 2026-03-05 01:07:51.180921 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-05 01:07:51.180928 | orchestrator | 2026-03-05 01:07:51.180934 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-05 01:07:51.180940 | orchestrator | Thursday 05 March 2026 01:06:58 +0000 (0:00:03.858) 0:00:18.092 ******** 2026-03-05 01:07:51.180947 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-05 01:07:51.180954 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-05 01:07:51.180960 | orchestrator | 2026-03-05 01:07:51.180967 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-05 01:07:51.180973 | orchestrator | Thursday 05 March 2026 01:07:02 +0000 (0:00:04.393) 0:00:22.486 ******** 2026-03-05 01:07:51.180979 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-05 01:07:51.180986 | orchestrator | 2026-03-05 01:07:51.180993 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-05 01:07:51.181000 | orchestrator | Thursday 05 March 2026 01:07:06 +0000 (0:00:03.616) 0:00:26.102 ******** 2026-03-05 01:07:51.181007 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-05 01:07:51.181013 | orchestrator | 2026-03-05 01:07:51.181020 | orchestrator | TASK [placement : include_tasks] 
*********************************************** 2026-03-05 01:07:51.181026 | orchestrator | Thursday 05 March 2026 01:07:10 +0000 (0:00:04.502) 0:00:30.605 ******** 2026-03-05 01:07:51.181053 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:07:51.181059 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:07:51.181065 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:07:51.181072 | orchestrator | 2026-03-05 01:07:51.181078 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-05 01:07:51.181084 | orchestrator | Thursday 05 March 2026 01:07:11 +0000 (0:00:00.584) 0:00:31.189 ******** 2026-03-05 01:07:51.181093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181135 | orchestrator | 2026-03-05 01:07:51.181143 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-05 01:07:51.181147 | orchestrator | Thursday 05 March 2026 01:07:12 +0000 (0:00:01.159) 0:00:32.348 ******** 2026-03-05 01:07:51.181151 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:07:51.181155 | orchestrator | 2026-03-05 01:07:51.181158 | orchestrator | TASK [placement : Set placement 
policy file] *********************************** 2026-03-05 01:07:51.181162 | orchestrator | Thursday 05 March 2026 01:07:12 +0000 (0:00:00.105) 0:00:32.454 ******** 2026-03-05 01:07:51.181166 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:07:51.181170 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:07:51.181174 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:07:51.181177 | orchestrator | 2026-03-05 01:07:51.181181 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-05 01:07:51.181185 | orchestrator | Thursday 05 March 2026 01:07:12 +0000 (0:00:00.460) 0:00:32.914 ******** 2026-03-05 01:07:51.181189 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:07:51.181193 | orchestrator | 2026-03-05 01:07:51.181197 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-05 01:07:51.181201 | orchestrator | Thursday 05 March 2026 01:07:13 +0000 (0:00:00.509) 0:00:33.424 ******** 2026-03-05 01:07:51.181205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181237 | orchestrator | 
2026-03-05 01:07:51.181245 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-05 01:07:51.181251 | orchestrator | Thursday 05 March 2026 01:07:15 +0000 (0:00:01.566) 0:00:34.990 ******** 2026-03-05 01:07:51.181261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:07:51.181268 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:07:51.181275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:07:51.181282 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:07:51.181294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:07:51.181306 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:07:51.181311 | orchestrator | 2026-03-05 01:07:51.181315 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-05 01:07:51.181319 | orchestrator | Thursday 05 March 2026 01:07:15 +0000 (0:00:00.761) 0:00:35.751 ******** 2026-03-05 01:07:51.181323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:07:51.181327 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:07:51.181334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:07:51.181338 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:07:51.181342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:07:51.181346 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:07:51.181350 | orchestrator | 2026-03-05 01:07:51.181354 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-05 01:07:51.181358 | orchestrator | Thursday 05 March 2026 01:07:16 +0000 (0:00:00.799) 0:00:36.550 ******** 2026-03-05 01:07:51.181367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 
01:07:51.181375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181387 | orchestrator | 2026-03-05 01:07:51.181391 | orchestrator | TASK [placement : 
Copying over placement.conf] ********************************* 2026-03-05 01:07:51.181395 | orchestrator | Thursday 05 March 2026 01:07:18 +0000 (0:00:01.586) 0:00:38.137 ******** 2026-03-05 01:07:51.181398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181422 | orchestrator | 2026-03-05 01:07:51.181430 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-05 01:07:51.181438 | orchestrator | Thursday 05 March 2026 01:07:20 +0000 (0:00:02.342) 0:00:40.480 ******** 2026-03-05 01:07:51.181445 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-05 01:07:51.181451 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-05 01:07:51.181458 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-05 01:07:51.181463 | orchestrator | 2026-03-05 01:07:51.181469 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-05 01:07:51.181475 | orchestrator | Thursday 05 March 2026 01:07:21 
+0000 (0:00:01.389) 0:00:41.869 ******** 2026-03-05 01:07:51.181482 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:07:51.181488 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:07:51.181494 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:07:51.181500 | orchestrator | 2026-03-05 01:07:51.181515 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-05 01:07:51.181520 | orchestrator | Thursday 05 March 2026 01:07:23 +0000 (0:00:01.235) 0:00:43.104 ******** 2026-03-05 01:07:51.181524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:07:51.181528 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:07:51.181536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:07:51.181540 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:07:51.181549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-05 01:07:51.181554 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:07:51.181557 | orchestrator | 2026-03-05 01:07:51.181561 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-05 01:07:51.181565 | orchestrator | Thursday 05 March 2026 01:07:23 +0000 (0:00:00.450) 0:00:43.554 ******** 2026-03-05 01:07:51.181569 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-05 01:07:51.181588 | orchestrator | 2026-03-05 01:07:51.181592 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-05 01:07:51.181596 | orchestrator | Thursday 05 March 2026 01:07:24 +0000 (0:00:00.995) 0:00:44.550 ******** 2026-03-05 01:07:51.181600 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:07:51.181604 | orchestrator | 2026-03-05 01:07:51.181608 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-05 01:07:51.181611 | orchestrator | Thursday 05 March 2026 01:07:27 +0000 (0:00:02.470) 0:00:47.020 ******** 2026-03-05 01:07:51.181615 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:07:51.181619 | orchestrator | 2026-03-05 01:07:51.181623 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-05 01:07:51.181627 | orchestrator | Thursday 05 March 2026 01:07:29 +0000 (0:00:02.412) 0:00:49.433 ******** 2026-03-05 01:07:51.181634 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:07:51.181638 | orchestrator | 2026-03-05 01:07:51.181642 | orchestrator | TASK [placement : Flush handlers] 
********************************************** 2026-03-05 01:07:51.181646 | orchestrator | Thursday 05 March 2026 01:07:44 +0000 (0:00:14.732) 0:01:04.165 ******** 2026-03-05 01:07:51.181650 | orchestrator | 2026-03-05 01:07:51.181654 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-05 01:07:51.181657 | orchestrator | Thursday 05 March 2026 01:07:44 +0000 (0:00:00.068) 0:01:04.234 ******** 2026-03-05 01:07:51.181661 | orchestrator | 2026-03-05 01:07:51.181665 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-05 01:07:51.181669 | orchestrator | Thursday 05 March 2026 01:07:44 +0000 (0:00:00.096) 0:01:04.330 ******** 2026-03-05 01:07:51.181673 | orchestrator | 2026-03-05 01:07:51.181677 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-05 01:07:51.181681 | orchestrator | Thursday 05 March 2026 01:07:44 +0000 (0:00:00.082) 0:01:04.412 ******** 2026-03-05 01:07:51.181684 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:07:51.181688 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:07:51.181692 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:07:51.181696 | orchestrator | 2026-03-05 01:07:51.181700 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:07:51.181705 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-05 01:07:51.181711 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-05 01:07:51.181715 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-05 01:07:51.181719 | orchestrator | 2026-03-05 01:07:51.181723 | orchestrator | 2026-03-05 01:07:51.181727 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-05 01:07:51.181731 | orchestrator | Thursday 05 March 2026 01:07:49 +0000 (0:00:05.079) 0:01:09.492 ******** 2026-03-05 01:07:51.181740 | orchestrator | =============================================================================== 2026-03-05 01:07:51.181746 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.73s 2026-03-05 01:07:51.181758 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.35s 2026-03-05 01:07:51.181765 | orchestrator | placement : Restart placement-api container ----------------------------- 5.08s 2026-03-05 01:07:51.181771 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.50s 2026-03-05 01:07:51.181777 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.39s 2026-03-05 01:07:51.181784 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.14s 2026-03-05 01:07:51.181789 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.86s 2026-03-05 01:07:51.181795 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.62s 2026-03-05 01:07:51.181801 | orchestrator | placement : Creating placement databases -------------------------------- 2.47s 2026-03-05 01:07:51.181806 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.41s 2026-03-05 01:07:51.181811 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.34s 2026-03-05 01:07:51.181817 | orchestrator | placement : Copying over config.json files for services ----------------- 1.59s 2026-03-05 01:07:51.181823 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.57s 2026-03-05 01:07:51.181829 | orchestrator | placement : Copying over 
placement-api wsgi configuration --------------- 1.39s 2026-03-05 01:07:51.181835 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.24s 2026-03-05 01:07:51.181841 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.16s 2026-03-05 01:07:51.181847 | orchestrator | placement : Check placement containers ---------------------------------- 1.00s 2026-03-05 01:07:51.181853 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.99s 2026-03-05 01:07:51.181860 | orchestrator | placement : include_tasks ----------------------------------------------- 0.81s 2026-03-05 01:07:51.181866 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.80s 2026-03-05 01:07:51.181874 | orchestrator | 2026-03-05 01:07:51 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:07:51.182317 | orchestrator | 2026-03-05 01:07:51 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:07:51.182865 | orchestrator | 2026-03-05 01:07:51 | INFO  | Task 2f2502b5-fdcd-41a6-b9d0-9f3c8ef28dc7 is in state STARTED 2026-03-05 01:07:51.183933 | orchestrator | 2026-03-05 01:07:51 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:07:51.183977 | orchestrator | 2026-03-05 01:07:51 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:54.229166 | orchestrator | 2026-03-05 01:07:54 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:07:54.229294 | orchestrator | 2026-03-05 01:07:54 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:07:54.230319 | orchestrator | 2026-03-05 01:07:54 | INFO  | Task 2f2502b5-fdcd-41a6-b9d0-9f3c8ef28dc7 is in state STARTED 2026-03-05 01:07:54.231258 | orchestrator | 2026-03-05 01:07:54 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 
2026-03-05 01:07:54.231284 | orchestrator | 2026-03-05 01:07:54 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:07:57.267533 | orchestrator | 2026-03-05 01:07:57 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:07:57.269052 | orchestrator | 2026-03-05 01:07:57 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:07:57.269862 | orchestrator | 2026-03-05 01:07:57 | INFO  | Task 2f2502b5-fdcd-41a6-b9d0-9f3c8ef28dc7 is in state SUCCESS 2026-03-05 01:07:57.271367 | orchestrator | 2026-03-05 01:07:57 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:07:57.271412 | orchestrator | 2026-03-05 01:07:57 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:08:00.316362 | orchestrator | 2026-03-05 01:08:00 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:00.317960 | orchestrator | 2026-03-05 01:08:00 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:00.320299 | orchestrator | 2026-03-05 01:08:00 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:00.321801 | orchestrator | 2026-03-05 01:08:00 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:00.321908 | orchestrator | 2026-03-05 01:08:00 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:08:03.372132 | orchestrator | 2026-03-05 01:08:03 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:03.373066 | orchestrator | 2026-03-05 01:08:03 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:03.376433 | orchestrator | 2026-03-05 01:08:03 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:03.377370 | orchestrator | 2026-03-05 01:08:03 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:03.377433 | 
orchestrator | 2026-03-05 01:08:03 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:08:06.426094 | orchestrator | 2026-03-05 01:08:06 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:06.427592 | orchestrator | 2026-03-05 01:08:06 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:06.429405 | orchestrator | 2026-03-05 01:08:06 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:06.430300 | orchestrator | 2026-03-05 01:08:06 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:06.430334 | orchestrator | 2026-03-05 01:08:06 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:08:09.484517 | orchestrator | 2026-03-05 01:08:09 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:09.486446 | orchestrator | 2026-03-05 01:08:09 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:09.488504 | orchestrator | 2026-03-05 01:08:09 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:09.491075 | orchestrator | 2026-03-05 01:08:09 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:09.491118 | orchestrator | 2026-03-05 01:08:09 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:08:12.535118 | orchestrator | 2026-03-05 01:08:12 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:12.537447 | orchestrator | 2026-03-05 01:08:12 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:12.538232 | orchestrator | 2026-03-05 01:08:12 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:12.539570 | orchestrator | 2026-03-05 01:08:12 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:12.539604 | orchestrator | 2026-03-05 
01:08:12 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:08:15.598130 | orchestrator | 2026-03-05 01:08:15 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:15.598226 | orchestrator | 2026-03-05 01:08:15 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:15.598242 | orchestrator | 2026-03-05 01:08:15 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:15.598252 | orchestrator | 2026-03-05 01:08:15 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:15.598261 | orchestrator | 2026-03-05 01:08:15 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:08:18.726189 | orchestrator | 2026-03-05 01:08:18 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:18.726708 | orchestrator | 2026-03-05 01:08:18 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:18.728746 | orchestrator | 2026-03-05 01:08:18 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:18.729996 | orchestrator | 2026-03-05 01:08:18 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:18.730141 | orchestrator | 2026-03-05 01:08:18 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:08:21.770992 | orchestrator | 2026-03-05 01:08:21 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:21.772947 | orchestrator | 2026-03-05 01:08:21 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:21.774213 | orchestrator | 2026-03-05 01:08:21 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:21.774835 | orchestrator | 2026-03-05 01:08:21 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:21.775116 | orchestrator | 2026-03-05 01:08:21 | INFO  | Wait 1 
second(s) until the next check 2026-03-05 01:08:24.809415 | orchestrator | 2026-03-05 01:08:24 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:24.809755 | orchestrator | 2026-03-05 01:08:24 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:24.810699 | orchestrator | 2026-03-05 01:08:24 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:24.811501 | orchestrator | 2026-03-05 01:08:24 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:24.811530 | orchestrator | 2026-03-05 01:08:24 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:08:27.863279 | orchestrator | 2026-03-05 01:08:27 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:27.865476 | orchestrator | 2026-03-05 01:08:27 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:27.867098 | orchestrator | 2026-03-05 01:08:27 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:27.868445 | orchestrator | 2026-03-05 01:08:27 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:27.868509 | orchestrator | 2026-03-05 01:08:27 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:08:30.916975 | orchestrator | 2026-03-05 01:08:30 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:30.919661 | orchestrator | 2026-03-05 01:08:30 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:30.923996 | orchestrator | 2026-03-05 01:08:30 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:30.925246 | orchestrator | 2026-03-05 01:08:30 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:30.925516 | orchestrator | 2026-03-05 01:08:30 | INFO  | Wait 1 second(s) until the next check 
2026-03-05 01:08:34.101862 | orchestrator | 2026-03-05 01:08:33 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:34.101922 | orchestrator | 2026-03-05 01:08:33 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:34.101931 | orchestrator | 2026-03-05 01:08:33 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:34.101938 | orchestrator | 2026-03-05 01:08:33 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:34.101945 | orchestrator | 2026-03-05 01:08:33 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:08:37.133773 | orchestrator | 2026-03-05 01:08:37 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:37.133837 | orchestrator | 2026-03-05 01:08:37 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:37.133845 | orchestrator | 2026-03-05 01:08:37 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:37.133852 | orchestrator | 2026-03-05 01:08:37 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:37.133859 | orchestrator | 2026-03-05 01:08:37 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:08:40.085850 | orchestrator | 2026-03-05 01:08:40 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:08:40.087621 | orchestrator | 2026-03-05 01:08:40 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:08:40.088492 | orchestrator | 2026-03-05 01:08:40 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:08:40.089357 | orchestrator | 2026-03-05 01:08:40 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED 2026-03-05 01:08:40.089516 | orchestrator | 2026-03-05 01:08:40 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:08:43.146516 | 
orchestrator | 2026-03-05 01:08:43 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:08:43.150919 | orchestrator | 2026-03-05 01:08:43 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED
2026-03-05 01:08:43.152194 | orchestrator | 2026-03-05 01:08:43 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:08:43.155045 | orchestrator | 2026-03-05 01:08:43 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED
2026-03-05 01:08:43.155079 | orchestrator | 2026-03-05 01:08:43 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:46.211432 | orchestrator | 2026-03-05 01:08:46 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:08:46.212747 | orchestrator | 2026-03-05 01:08:46 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED
2026-03-05 01:08:46.213662 | orchestrator | 2026-03-05 01:08:46 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:08:46.214402 | orchestrator | 2026-03-05 01:08:46 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED
2026-03-05 01:08:46.214418 | orchestrator | 2026-03-05 01:08:46 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:49.290465 | orchestrator | 2026-03-05 01:08:49 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:08:49.290673 | orchestrator | 2026-03-05 01:08:49 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED
2026-03-05 01:08:49.291540 | orchestrator | 2026-03-05 01:08:49 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:08:49.293346 | orchestrator | 2026-03-05 01:08:49 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED
2026-03-05 01:08:49.293385 | orchestrator | 2026-03-05 01:08:49 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:52.332449 | orchestrator | 2026-03-05 01:08:52 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:08:52.332537 | orchestrator | 2026-03-05 01:08:52 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED
2026-03-05 01:08:52.332547 | orchestrator | 2026-03-05 01:08:52 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:08:52.333344 | orchestrator | 2026-03-05 01:08:52 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED
2026-03-05 01:08:52.333387 | orchestrator | 2026-03-05 01:08:52 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:55.371142 | orchestrator | 2026-03-05 01:08:55 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:08:55.372458 | orchestrator | 2026-03-05 01:08:55 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED
2026-03-05 01:08:55.420399 | orchestrator | 2026-03-05 01:08:55 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:08:55.420462 | orchestrator | 2026-03-05 01:08:55 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED
2026-03-05 01:08:55.420472 | orchestrator | 2026-03-05 01:08:55 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:08:58.460788 | orchestrator | 2026-03-05 01:08:58 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:08:58.462253 | orchestrator | 2026-03-05 01:08:58 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED
2026-03-05 01:08:58.462317 | orchestrator | 2026-03-05 01:08:58 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:08:58.463667 | orchestrator | 2026-03-05 01:08:58 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED
2026-03-05 01:08:58.463717 | orchestrator | 2026-03-05 01:08:58 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:09:01.501264 | orchestrator | 2026-03-05 01:09:01 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:09:01.503402 | orchestrator | 2026-03-05 01:09:01 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED
2026-03-05 01:09:01.506204 | orchestrator | 2026-03-05 01:09:01 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:09:01.507824 | orchestrator | 2026-03-05 01:09:01 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state STARTED
2026-03-05 01:09:01.507875 | orchestrator | 2026-03-05 01:09:01 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:09:04.547139 | orchestrator | 2026-03-05 01:09:04 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:09:04.549692 | orchestrator | 2026-03-05 01:09:04 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED
2026-03-05 01:09:04.551968 | orchestrator | 2026-03-05 01:09:04 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:09:04.556514 | orchestrator | 2026-03-05 01:09:04 | INFO  | Task 0d9dd53f-4181-40fc-92b7-3fb7f574eada is in state SUCCESS
2026-03-05 01:09:04.557957 | orchestrator |
2026-03-05 01:09:04.558068 | orchestrator |
2026-03-05 01:09:04.558088 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:09:04.558102 | orchestrator |
2026-03-05 01:09:04.558138 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:09:04.558152 | orchestrator | Thursday 05 March 2026 01:07:54 +0000 (0:00:00.194) 0:00:00.194 ********
2026-03-05 01:09:04.558165 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:09:04.558178 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:09:04.558191 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:09:04.558287 | orchestrator |
2026-03-05 01:09:04.558298 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:09:04.558306 | orchestrator | Thursday 05 March 2026 01:07:54 +0000 (0:00:00.346) 0:00:00.541 ********
2026-03-05 01:09:04.558314 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-05 01:09:04.558322 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-05 01:09:04.558338 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-05 01:09:04.558346 | orchestrator |
2026-03-05 01:09:04.558354 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-03-05 01:09:04.558361 | orchestrator |
2026-03-05 01:09:04.558368 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-03-05 01:09:04.558376 | orchestrator | Thursday 05 March 2026 01:07:55 +0000 (0:00:00.894) 0:00:01.435 ********
2026-03-05 01:09:04.558383 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:09:04.558390 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:09:04.558397 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:09:04.558405 | orchestrator |
2026-03-05 01:09:04.558412 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:09:04.558420 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:09:04.558429 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:09:04.558460 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:09:04.558467 | orchestrator |
2026-03-05 01:09:04.558475 | orchestrator |
2026-03-05 01:09:04.558482 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:09:04.558490 | orchestrator | Thursday 05 March 2026 01:07:56 +0000 (0:00:00.962) 0:00:02.398 ********
2026-03-05 01:09:04.558497 | orchestrator | ===============================================================================
2026-03-05 01:09:04.558504 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.96s
2026-03-05 01:09:04.558512 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s
2026-03-05 01:09:04.558519 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2026-03-05 01:09:04.558527 | orchestrator |
2026-03-05 01:09:04.558534 | orchestrator |
2026-03-05 01:09:04.558541 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:09:04.558549 | orchestrator |
2026-03-05 01:09:04.558557 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:09:04.558566 | orchestrator | Thursday 05 March 2026 01:04:22 +0000 (0:00:00.260) 0:00:00.260 ********
2026-03-05 01:09:04.558576 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:09:04.558585 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:09:04.558594 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:09:04.558603 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:09:04.558611 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:09:04.558620 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:09:04.558629 | orchestrator |
2026-03-05 01:09:04.558638 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:09:04.558647 | orchestrator | Thursday 05 March 2026 01:04:22 +0000 (0:00:00.895) 0:00:01.156 ********
2026-03-05 01:09:04.558655 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-03-05 01:09:04.558662 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-03-05 01:09:04.558670 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-03-05 01:09:04.558685 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-03-05 01:09:04.558693 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-03-05 01:09:04.558700 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-03-05 01:09:04.558707 | orchestrator |
2026-03-05 01:09:04.558715 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-03-05 01:09:04.558722 | orchestrator |
2026-03-05 01:09:04.558729 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-05 01:09:04.558736 | orchestrator | Thursday 05 March 2026 01:04:23 +0000 (0:00:00.633) 0:00:01.789 ********
2026-03-05 01:09:04.558744 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:09:04.558752 | orchestrator |
2026-03-05 01:09:04.558759 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-03-05 01:09:04.558766 | orchestrator | Thursday 05 March 2026 01:04:24 +0000 (0:00:01.035) 0:00:02.825 ********
2026-03-05 01:09:04.558774 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:09:04.558781 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:09:04.558788 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:09:04.558795 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:09:04.558803 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:09:04.558810 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:09:04.558817 | orchestrator |
2026-03-05 01:09:04.558824 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-03-05 01:09:04.558832 | orchestrator | Thursday 05 March 2026 01:04:25 +0000 (0:00:01.253) 0:00:04.078 ********
2026-03-05 01:09:04.558840 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:09:04.558847 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:09:04.558864 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:09:04.558872 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:09:04.558879 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:09:04.558907 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:09:04.558916 | orchestrator |
2026-03-05 01:09:04.558950 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-03-05 01:09:04.558957 | orchestrator | Thursday 05 March 2026 01:04:27 +0000 (0:00:01.195) 0:00:05.274 ********
2026-03-05 01:09:04.558965 | orchestrator | ok: [testbed-node-0] => {
2026-03-05 01:09:04.558973 | orchestrator |  "changed": false,
2026-03-05 01:09:04.558980 | orchestrator |  "msg": "All assertions passed"
2026-03-05 01:09:04.558988 | orchestrator | }
2026-03-05 01:09:04.559012 | orchestrator | ok: [testbed-node-1] => {
2026-03-05 01:09:04.559020 | orchestrator |  "changed": false,
2026-03-05 01:09:04.559027 | orchestrator |  "msg": "All assertions passed"
2026-03-05 01:09:04.559034 | orchestrator | }
2026-03-05 01:09:04.559042 | orchestrator | ok: [testbed-node-2] => {
2026-03-05 01:09:04.559049 | orchestrator |  "changed": false,
2026-03-05 01:09:04.559056 | orchestrator |  "msg": "All assertions passed"
2026-03-05 01:09:04.559064 | orchestrator | }
2026-03-05 01:09:04.559071 | orchestrator | ok: [testbed-node-3] => {
2026-03-05 01:09:04.559082 | orchestrator |  "changed": false,
2026-03-05 01:09:04.559090 | orchestrator |  "msg": "All assertions passed"
2026-03-05 01:09:04.559097 | orchestrator | }
2026-03-05 01:09:04.559104 | orchestrator | ok: [testbed-node-4] => {
2026-03-05 01:09:04.559112 | orchestrator |  "changed": false,
2026-03-05 01:09:04.559119 | orchestrator |  "msg": "All assertions passed"
2026-03-05 01:09:04.559126 | orchestrator | }
2026-03-05 01:09:04.559134 | orchestrator | ok: [testbed-node-5] => {
2026-03-05 01:09:04.559141 | orchestrator |  "changed": false,
2026-03-05 01:09:04.559148 | orchestrator |  "msg": "All assertions passed"
2026-03-05 01:09:04.559155 | orchestrator | }
2026-03-05 01:09:04.559163 | orchestrator |
2026-03-05 01:09:04.559170 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-03-05 01:09:04.559177 | orchestrator | Thursday 05 March 2026 01:04:27 +0000 (0:00:00.818) 0:00:06.093 ********
2026-03-05 01:09:04.559190 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:09:04.559198 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:09:04.559205 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:09:04.559212 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:09:04.559219 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:09:04.559226 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:09:04.559233 | orchestrator |
2026-03-05 01:09:04.559241 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-03-05 01:09:04.559248 | orchestrator | Thursday 05 March 2026 01:04:28 +0000 (0:00:00.653) 0:00:06.746 ********
2026-03-05 01:09:04.559255 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-03-05 01:09:04.559263 | orchestrator |
2026-03-05 01:09:04.559270 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-03-05 01:09:04.559277 | orchestrator | Thursday 05 March 2026 01:04:32 +0000 (0:00:03.700) 0:00:10.447 ********
2026-03-05 01:09:04.559284 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-03-05 01:09:04.559292 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-03-05 01:09:04.559299 | orchestrator |
2026-03-05 01:09:04.559307 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-03-05 01:09:04.559315 | orchestrator | Thursday 05 March 2026 01:04:39 +0000 (0:00:07.471) 0:00:17.919 ********
2026-03-05 01:09:04.559322 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-05 01:09:04.559329 | orchestrator |
2026-03-05 01:09:04.559337 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-03-05 01:09:04.559344 | orchestrator | Thursday 05 March 2026 01:04:43 +0000 (0:00:03.567) 0:00:21.486 ********
2026-03-05 01:09:04.559351 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-05 01:09:04.559359 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-03-05 01:09:04.559366 | orchestrator |
2026-03-05 01:09:04.559374 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-03-05 01:09:04.559381 | orchestrator | Thursday 05 March 2026 01:04:47 +0000 (0:00:04.182) 0:00:25.669 ********
2026-03-05 01:09:04.559388 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-05 01:09:04.559396 | orchestrator |
2026-03-05 01:09:04.559403 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-03-05 01:09:04.559410 | orchestrator | Thursday 05 March 2026 01:04:51 +0000 (0:00:04.140) 0:00:29.810 ********
2026-03-05 01:09:04.559418 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-03-05 01:09:04.559425 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-03-05 01:09:04.559432 | orchestrator |
2026-03-05 01:09:04.559439 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-05 01:09:04.559447 | orchestrator | Thursday 05 March 2026 01:04:59 +0000 (0:00:08.051) 0:00:37.861 ********
2026-03-05 01:09:04.559454 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:09:04.559462 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:09:04.559469 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:09:04.559476 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:09:04.559484 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:09:04.559491 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:09:04.559498 | orchestrator |
2026-03-05 01:09:04.559505 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-03-05 01:09:04.559513 | orchestrator | Thursday 05 March 2026 01:05:00 +0000 (0:00:00.880) 0:00:38.742 ********
2026-03-05 01:09:04.559520 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:09:04.559527 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:09:04.559535 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:09:04.559542 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:09:04.559549 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:09:04.559561 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:09:04.559569 | orchestrator |
2026-03-05 01:09:04.559576 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-03-05 01:09:04.559584 | orchestrator | Thursday 05 March 2026 01:05:03 +0000 (0:00:03.157) 0:00:41.900 ********
2026-03-05 01:09:04.559591 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:09:04.559598 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:09:04.559606 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:09:04.559613 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:09:04.559620 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:09:04.559632 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:09:04.559640 | orchestrator |
2026-03-05 01:09:04.559647 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-05 01:09:04.559655 | orchestrator | Thursday 05 March 2026 01:05:06 +0000 (0:00:02.357) 0:00:44.258 ********
2026-03-05 01:09:04.559662 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:09:04.559669 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:09:04.559677 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:09:04.559684 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:09:04.559691 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:09:04.559698 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:09:04.559706 | orchestrator |
2026-03-05 01:09:04.559713 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-03-05 01:09:04.559721 | orchestrator | Thursday 05 March 2026 01:05:09 +0000 (0:00:03.214) 0:00:47.472 ********
2026-03-05 01:09:04.559734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-05 01:09:04.559747 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-05 01:09:04.559755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-05 01:09:04.559769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-05 01:09:04.559785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-05 01:09:04.559794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-05 01:09:04.559801 | orchestrator |
2026-03-05 01:09:04.559809 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-03-05 01:09:04.559817 | orchestrator | Thursday 05 March 2026 01:05:13 +0000 (0:00:03.856) 0:00:51.329 ********
2026-03-05 01:09:04.559824 | orchestrator | [WARNING]: Skipped
2026-03-05 01:09:04.559832 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-03-05 01:09:04.559839 | orchestrator | due to this access issue:
2026-03-05 01:09:04.559847 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-03-05 01:09:04.559854 | orchestrator | a directory
2026-03-05 01:09:04.559861 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-05 01:09:04.559869 | orchestrator |
2026-03-05 01:09:04.559876 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-05 01:09:04.559884 | orchestrator | Thursday 05 March 2026 01:05:14 +0000 (0:00:00.902) 0:00:52.231 ********
2026-03-05 01:09:04.559891 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:09:04.559899 | orchestrator |
2026-03-05 01:09:04.559906 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-03-05 01:09:04.559914 | orchestrator | Thursday 05 March 2026 01:05:15 +0000 (0:00:01.493) 0:00:53.725 ********
2026-03-05 01:09:04.559926 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-05 01:09:04.559939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-05 01:09:04.559950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-05 01:09:04.559958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-05 01:09:04.559966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-05 01:09:04.559981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-05 01:09:04.559988 | orchestrator |
2026-03-05 01:09:04.560061 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-03-05 01:09:04.560069 | orchestrator | Thursday 05 March 2026 01:05:20 +0000 (0:00:05.301) 0:00:59.027 ********
2026-03-05 01:09:04.560083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-05 01:09:04.560091 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:09:04.560107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-05 01:09:04.560115 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:09:04.560123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-05 01:09:04.560136 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:09:04.560144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-05 01:09:04.560152 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:09:04.560160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-05 01:09:04.560167 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:09:04.560181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-05 01:09:04.560189 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:09:04.560196 | orchestrator |
2026-03-05 01:09:04.560207 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-03-05 01:09:04.560215 | orchestrator | Thursday 05 March 2026 01:05:25 +0000 (0:00:04.378) 0:01:03.405 ********
2026-03-05 01:09:04.560222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-05 01:09:04.560230 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:09:04.560242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-05 01:09:04.560250 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:09:04.560258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-05 01:09:04.560265 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:09:04.560278 | orchestrator | skipping: [testbed-node-5] =>
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.560286 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.560297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.560305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.560317 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.560325 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.560332 | orchestrator | 2026-03-05 01:09:04.560340 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-05 01:09:04.560347 | orchestrator | Thursday 05 March 2026 01:05:29 +0000 (0:00:04.354) 0:01:07.759 ******** 2026-03-05 01:09:04.560355 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.560363 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.560370 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.560377 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.560385 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.560392 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.560399 | orchestrator | 2026-03-05 01:09:04.560406 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-05 01:09:04.560414 | orchestrator | Thursday 05 March 2026 01:05:32 +0000 (0:00:03.402) 0:01:11.162 ******** 2026-03-05 01:09:04.560421 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.560429 | orchestrator | 2026-03-05 01:09:04.560436 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-05 01:09:04.560444 | orchestrator | Thursday 05 March 2026 01:05:33 +0000 (0:00:00.162) 0:01:11.324 ******** 2026-03-05 01:09:04.560451 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.560458 | orchestrator | 
skipping: [testbed-node-1] 2026-03-05 01:09:04.560466 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.560473 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.560480 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.560487 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.560495 | orchestrator | 2026-03-05 01:09:04.560502 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-05 01:09:04.560510 | orchestrator | Thursday 05 March 2026 01:05:33 +0000 (0:00:00.627) 0:01:11.951 ******** 2026-03-05 01:09:04.560517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.560525 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.560869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.560889 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.560897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.560905 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.560913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.560921 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.560928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.560936 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.560948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.560956 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.560963 | orchestrator | 2026-03-05 01:09:04.560972 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-05 01:09:04.560985 | orchestrator | Thursday 05 March 2026 01:05:36 +0000 (0:00:03.128) 0:01:15.079 ******** 2026-03-05 01:09:04.561027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:09:04.561042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:09:04.561056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:09:04.561070 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:09:04.561089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:09:04.561106 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:09:04.561114 | orchestrator | 2026-03-05 01:09:04.561122 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-05 01:09:04.561130 | orchestrator | Thursday 05 
March 2026 01:05:41 +0000 (0:00:04.648) 0:01:19.728 ******** 2026-03-05 01:09:04.561138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:09:04.561146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:09:04.561153 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:09:04.561169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:09:04.561202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:09:04.561211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:09:04.561218 | orchestrator | 2026-03-05 01:09:04.561226 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-05 01:09:04.561233 | orchestrator | Thursday 05 March 2026 01:05:49 +0000 (0:00:08.282) 0:01:28.010 ******** 2026-03-05 01:09:04.561241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.561249 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.561261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.561273 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.561287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.561295 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.561303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.561311 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.561319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.561326 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.561334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.561346 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.561353 | orchestrator | 2026-03-05 01:09:04.561361 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-05 01:09:04.561368 | orchestrator | Thursday 05 March 2026 01:05:53 +0000 (0:00:03.792) 0:01:31.803 ******** 2026-03-05 01:09:04.561376 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.561383 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.561391 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:09:04.561398 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.561405 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:09:04.561412 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:09:04.561419 | orchestrator | 
2026-03-05 01:09:04.561427 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-05 01:09:04.561438 | orchestrator | Thursday 05 March 2026 01:05:56 +0000 (0:00:03.331) 0:01:35.135 ******** 2026-03-05 01:09:04.561448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.561456 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.561464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.561472 | orchestrator | skipping: [testbed-node-4] 2026-03-05 
01:09:04.561479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.561489 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.561499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:09:04.561516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:09:04.561530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:09:04.561539 | orchestrator | 2026-03-05 01:09:04.561548 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-05 01:09:04.561556 | orchestrator | Thursday 05 March 2026 01:06:01 +0000 (0:00:04.706) 
0:01:39.841 ******** 2026-03-05 01:09:04.561565 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.561574 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.561582 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.561591 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.561600 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.561609 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.561618 | orchestrator | 2026-03-05 01:09:04.561626 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-05 01:09:04.561635 | orchestrator | Thursday 05 March 2026 01:06:05 +0000 (0:00:03.607) 0:01:43.449 ******** 2026-03-05 01:09:04.561644 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.561653 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.561662 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.561670 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.561679 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.561687 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.561696 | orchestrator | 2026-03-05 01:09:04.561705 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-05 01:09:04.561714 | orchestrator | Thursday 05 March 2026 01:06:09 +0000 (0:00:04.077) 0:01:47.526 ******** 2026-03-05 01:09:04.561722 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.561731 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.561740 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.561750 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.561766 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.561775 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.561783 | orchestrator | 2026-03-05 01:09:04.561792 | orchestrator | TASK [neutron : Copying over 
mlnx_agent.ini] *********************************** 2026-03-05 01:09:04.561801 | orchestrator | Thursday 05 March 2026 01:06:11 +0000 (0:00:02.292) 0:01:49.819 ******** 2026-03-05 01:09:04.561810 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.561819 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.561827 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.561836 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.561846 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.561853 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.561860 | orchestrator | 2026-03-05 01:09:04.561868 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-05 01:09:04.561875 | orchestrator | Thursday 05 March 2026 01:06:13 +0000 (0:00:02.067) 0:01:51.887 ******** 2026-03-05 01:09:04.561882 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.561890 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.561897 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.561904 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.561911 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.561919 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.561926 | orchestrator | 2026-03-05 01:09:04.561933 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-05 01:09:04.561941 | orchestrator | Thursday 05 March 2026 01:06:15 +0000 (0:00:02.175) 0:01:54.062 ******** 2026-03-05 01:09:04.561948 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.561955 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.561963 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.561970 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.561977 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.561985 | orchestrator | 
skipping: [testbed-node-5] 2026-03-05 01:09:04.562006 | orchestrator | 2026-03-05 01:09:04.562038 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-05 01:09:04.562047 | orchestrator | Thursday 05 March 2026 01:06:18 +0000 (0:00:02.510) 0:01:56.572 ******** 2026-03-05 01:09:04.562055 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-05 01:09:04.562062 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.562070 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-05 01:09:04.562077 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.562084 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-05 01:09:04.562092 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.562099 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-05 01:09:04.562111 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.562118 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-05 01:09:04.562129 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.562137 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-05 01:09:04.562144 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.562152 | orchestrator | 2026-03-05 01:09:04.562159 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-05 01:09:04.562166 | orchestrator | Thursday 05 March 2026 01:06:21 +0000 (0:00:03.548) 0:02:00.121 ******** 2026-03-05 01:09:04.562178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.562191 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.562199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.562206 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.562214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.562222 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.562229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.562237 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.562249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.562261 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.562269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.562277 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.562284 | orchestrator | 2026-03-05 01:09:04.562291 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-05 01:09:04.562299 | orchestrator | Thursday 05 March 2026 01:06:24 +0000 (0:00:02.314) 0:02:02.436 ******** 2026-03-05 01:09:04.562306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.562314 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.562322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.562329 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.562422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.562451 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.562463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.562470 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.562478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.562486 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.562493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.562501 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.562508 | orchestrator | 2026-03-05 01:09:04.562516 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-05 01:09:04.562523 | orchestrator | Thursday 05 March 2026 01:06:27 +0000 (0:00:03.053) 0:02:05.490 ******** 2026-03-05 01:09:04.562531 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.562538 | orchestrator | skipping: [testbed-node-1] 2026-03-05 
01:09:04.562545 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.562553 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.562560 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.562567 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.562574 | orchestrator | 2026-03-05 01:09:04.562581 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-05 01:09:04.562589 | orchestrator | Thursday 05 March 2026 01:06:29 +0000 (0:00:02.609) 0:02:08.099 ******** 2026-03-05 01:09:04.562596 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.562603 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.562611 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.562618 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:09:04.562625 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:09:04.562637 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:09:04.562644 | orchestrator | 2026-03-05 01:09:04.562651 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-05 01:09:04.562659 | orchestrator | Thursday 05 March 2026 01:06:34 +0000 (0:00:04.666) 0:02:12.765 ******** 2026-03-05 01:09:04.562666 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.562674 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.562681 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.562688 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.562695 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.562703 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.562710 | orchestrator | 2026-03-05 01:09:04.562724 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-05 01:09:04.562732 | orchestrator | Thursday 05 March 2026 01:06:38 +0000 (0:00:03.514) 0:02:16.280 ******** 2026-03-05 
01:09:04.562739 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.562747 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.562754 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.562761 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.562768 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.562776 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.562783 | orchestrator | 2026-03-05 01:09:04.562790 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-05 01:09:04.562798 | orchestrator | Thursday 05 March 2026 01:06:40 +0000 (0:00:02.690) 0:02:18.970 ******** 2026-03-05 01:09:04.562805 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.562812 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.562823 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.562830 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.562838 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.562845 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.562852 | orchestrator | 2026-03-05 01:09:04.562859 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-05 01:09:04.562867 | orchestrator | Thursday 05 March 2026 01:06:43 +0000 (0:00:02.577) 0:02:21.547 ******** 2026-03-05 01:09:04.562874 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.562881 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.562888 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.562895 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.562903 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.562910 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.562917 | orchestrator | 2026-03-05 01:09:04.562925 | orchestrator | TASK [neutron : Copying over nsx.ini] 
****************************************** 2026-03-05 01:09:04.562932 | orchestrator | Thursday 05 March 2026 01:06:45 +0000 (0:00:02.421) 0:02:23.969 ******** 2026-03-05 01:09:04.562939 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.562947 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.562954 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.562961 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.562968 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.562975 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.562983 | orchestrator | 2026-03-05 01:09:04.563003 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-05 01:09:04.563011 | orchestrator | Thursday 05 March 2026 01:06:49 +0000 (0:00:03.305) 0:02:27.274 ******** 2026-03-05 01:09:04.563019 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.563026 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.563033 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.563040 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.563048 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.563055 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.563062 | orchestrator | 2026-03-05 01:09:04.563069 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-05 01:09:04.563081 | orchestrator | Thursday 05 March 2026 01:06:52 +0000 (0:00:02.933) 0:02:30.208 ******** 2026-03-05 01:09:04.563089 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.563096 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.563103 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.563110 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.563117 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.563125 | orchestrator | skipping: 
[testbed-node-5] 2026-03-05 01:09:04.563132 | orchestrator | 2026-03-05 01:09:04.563139 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-05 01:09:04.563146 | orchestrator | Thursday 05 March 2026 01:06:54 +0000 (0:00:02.246) 0:02:32.454 ******** 2026-03-05 01:09:04.563154 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-05 01:09:04.563161 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.563169 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-05 01:09:04.563176 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.563183 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-05 01:09:04.563191 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.563198 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-05 01:09:04.563205 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.563212 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-05 01:09:04.563220 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.563227 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-05 01:09:04.563234 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.563241 | orchestrator | 2026-03-05 01:09:04.563249 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-05 01:09:04.563256 | orchestrator | Thursday 05 March 2026 01:06:56 +0000 (0:00:02.342) 0:02:34.797 ******** 2026-03-05 01:09:04.563269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.563277 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.563289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.563302 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.563310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-05 01:09:04.563317 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.563325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.563332 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.563340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.563347 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.563361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-05 01:09:04.563368 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.563376 | orchestrator | 2026-03-05 01:09:04.563386 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-05 01:09:04.563394 | orchestrator | Thursday 05 March 2026 01:06:59 +0000 (0:00:02.877) 0:02:37.674 ******** 2026-03-05 01:09:04.563401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:09:04.563413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:09:04.563421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:09:04.563433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-05 01:09:04.563444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:09:04.563457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-05 01:09:04.563464 | orchestrator | 2026-03-05 01:09:04.563472 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-05 01:09:04.563480 | orchestrator | Thursday 05 March 2026 01:07:03 +0000 (0:00:03.992) 0:02:41.667 ******** 2026-03-05 01:09:04.563487 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:04.563494 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:04.563502 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:04.563509 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:09:04.563516 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:09:04.563523 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:09:04.563531 | orchestrator | 2026-03-05 01:09:04.563538 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-05 01:09:04.563545 
| orchestrator | Thursday 05 March 2026 01:07:04 +0000 (0:00:00.563) 0:02:42.230 ******** 2026-03-05 01:09:04.563553 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:09:04.563560 | orchestrator | 2026-03-05 01:09:04.563568 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-05 01:09:04.563575 | orchestrator | Thursday 05 March 2026 01:07:06 +0000 (0:00:02.446) 0:02:44.677 ******** 2026-03-05 01:09:04.563582 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:09:04.563590 | orchestrator | 2026-03-05 01:09:04.563597 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-05 01:09:04.563604 | orchestrator | Thursday 05 March 2026 01:07:08 +0000 (0:00:02.443) 0:02:47.121 ******** 2026-03-05 01:09:04.563612 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:09:04.563619 | orchestrator | 2026-03-05 01:09:04.563626 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-05 01:09:04.563634 | orchestrator | Thursday 05 March 2026 01:07:52 +0000 (0:00:43.378) 0:03:30.500 ******** 2026-03-05 01:09:04.563641 | orchestrator | 2026-03-05 01:09:04.563648 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-05 01:09:04.563655 | orchestrator | Thursday 05 March 2026 01:07:52 +0000 (0:00:00.075) 0:03:30.575 ******** 2026-03-05 01:09:04.563663 | orchestrator | 2026-03-05 01:09:04.563670 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-05 01:09:04.563677 | orchestrator | Thursday 05 March 2026 01:07:52 +0000 (0:00:00.267) 0:03:30.843 ******** 2026-03-05 01:09:04.563684 | orchestrator | 2026-03-05 01:09:04.563701 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-05 01:09:04.563709 | orchestrator | Thursday 05 March 2026 01:07:52 +0000 (0:00:00.089) 
0:03:30.932 ******** 2026-03-05 01:09:04.563716 | orchestrator | 2026-03-05 01:09:04.563723 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-05 01:09:04.563731 | orchestrator | Thursday 05 March 2026 01:07:52 +0000 (0:00:00.069) 0:03:31.002 ******** 2026-03-05 01:09:04.563738 | orchestrator | 2026-03-05 01:09:04.563745 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-05 01:09:04.563752 | orchestrator | Thursday 05 March 2026 01:07:52 +0000 (0:00:00.066) 0:03:31.068 ******** 2026-03-05 01:09:04.563764 | orchestrator | 2026-03-05 01:09:04.563772 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-05 01:09:04.563779 | orchestrator | Thursday 05 March 2026 01:07:52 +0000 (0:00:00.068) 0:03:31.136 ******** 2026-03-05 01:09:04.563786 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:09:04.563793 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:09:04.563801 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:09:04.563808 | orchestrator | 2026-03-05 01:09:04.563815 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-05 01:09:04.563822 | orchestrator | Thursday 05 March 2026 01:08:13 +0000 (0:00:20.315) 0:03:51.452 ******** 2026-03-05 01:09:04.563830 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:09:04.563841 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:09:04.563849 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:09:04.563856 | orchestrator | 2026-03-05 01:09:04.563863 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:09:04.563877 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-05 01:09:04.563890 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 
skipped=31  rescued=0 ignored=0 2026-03-05 01:09:04.563907 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-05 01:09:04.563930 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-05 01:09:04.563944 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-05 01:09:04.563956 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-05 01:09:04.563969 | orchestrator | 2026-03-05 01:09:04.563981 | orchestrator | 2026-03-05 01:09:04.564007 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:09:04.564021 | orchestrator | Thursday 05 March 2026 01:09:03 +0000 (0:00:50.133) 0:04:41.585 ******** 2026-03-05 01:09:04.564033 | orchestrator | =============================================================================== 2026-03-05 01:09:04.564045 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 50.13s 2026-03-05 01:09:04.564057 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.38s 2026-03-05 01:09:04.564070 | orchestrator | neutron : Restart neutron-server container ----------------------------- 20.32s 2026-03-05 01:09:04.564081 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.28s 2026-03-05 01:09:04.564093 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.05s 2026-03-05 01:09:04.564106 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.47s 2026-03-05 01:09:04.564119 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.30s 2026-03-05 01:09:04.564132 | orchestrator | neutron : Copying over ml2_conf.ini 
------------------------------------- 4.71s 2026-03-05 01:09:04.564144 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.67s 2026-03-05 01:09:04.564157 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.65s 2026-03-05 01:09:04.564168 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.38s 2026-03-05 01:09:04.564176 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.35s 2026-03-05 01:09:04.564183 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.18s 2026-03-05 01:09:04.564190 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 4.14s 2026-03-05 01:09:04.564208 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 4.08s 2026-03-05 01:09:04.564215 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.99s 2026-03-05 01:09:04.564223 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.86s 2026-03-05 01:09:04.564230 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.79s 2026-03-05 01:09:04.564237 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.70s 2026-03-05 01:09:04.564244 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 3.61s 2026-03-05 01:09:04.564252 | orchestrator | 2026-03-05 01:09:04 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:07.586789 | orchestrator | 2026-03-05 01:09:07 | INFO  | Task c8ce780e-3e4d-46b5-a620-c359269fa675 is in state STARTED 2026-03-05 01:09:07.587161 | orchestrator | 2026-03-05 01:09:07 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:07.587804 | orchestrator | 2026-03-05 01:09:07 | INFO  | 
Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:07.588463 | orchestrator | 2026-03-05 01:09:07 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:07.588489 | orchestrator | 2026-03-05 01:09:07 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:10.629436 | orchestrator | 2026-03-05 01:09:10 | INFO  | Task c8ce780e-3e4d-46b5-a620-c359269fa675 is in state STARTED 2026-03-05 01:09:10.630194 | orchestrator | 2026-03-05 01:09:10 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:10.631268 | orchestrator | 2026-03-05 01:09:10 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:10.632348 | orchestrator | 2026-03-05 01:09:10 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:10.632469 | orchestrator | 2026-03-05 01:09:10 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:13.683599 | orchestrator | 2026-03-05 01:09:13 | INFO  | Task c8ce780e-3e4d-46b5-a620-c359269fa675 is in state STARTED 2026-03-05 01:09:13.684291 | orchestrator | 2026-03-05 01:09:13 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:13.685572 | orchestrator | 2026-03-05 01:09:13 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:13.686643 | orchestrator | 2026-03-05 01:09:13 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:13.687821 | orchestrator | 2026-03-05 01:09:13 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:16.738454 | orchestrator | 2026-03-05 01:09:16 | INFO  | Task c8ce780e-3e4d-46b5-a620-c359269fa675 is in state STARTED 2026-03-05 01:09:16.740149 | orchestrator | 2026-03-05 01:09:16 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:16.742935 | orchestrator | 2026-03-05 01:09:16 | INFO  | Task 
69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:16.744550 | orchestrator | 2026-03-05 01:09:16 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:16.744584 | orchestrator | 2026-03-05 01:09:16 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:19.785894 | orchestrator | 2026-03-05 01:09:19 | INFO  | Task c8ce780e-3e4d-46b5-a620-c359269fa675 is in state STARTED 2026-03-05 01:09:19.787227 | orchestrator | 2026-03-05 01:09:19 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:19.789861 | orchestrator | 2026-03-05 01:09:19 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:19.793114 | orchestrator | 2026-03-05 01:09:19 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:19.793157 | orchestrator | 2026-03-05 01:09:19 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:22.837188 | orchestrator | 2026-03-05 01:09:22 | INFO  | Task c8ce780e-3e4d-46b5-a620-c359269fa675 is in state STARTED 2026-03-05 01:09:22.837262 | orchestrator | 2026-03-05 01:09:22 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:22.837267 | orchestrator | 2026-03-05 01:09:22 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:22.837272 | orchestrator | 2026-03-05 01:09:22 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:22.837276 | orchestrator | 2026-03-05 01:09:22 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:25.865328 | orchestrator | 2026-03-05 01:09:25 | INFO  | Task c8ce780e-3e4d-46b5-a620-c359269fa675 is in state STARTED 2026-03-05 01:09:25.866365 | orchestrator | 2026-03-05 01:09:25 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:25.868463 | orchestrator | 2026-03-05 01:09:25 | INFO  | Task 
69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:25.871075 | orchestrator | 2026-03-05 01:09:25 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:25.871115 | orchestrator | 2026-03-05 01:09:25 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:28.911303 | orchestrator | 2026-03-05 01:09:28 | INFO  | Task c8ce780e-3e4d-46b5-a620-c359269fa675 is in state STARTED 2026-03-05 01:09:28.916395 | orchestrator | 2026-03-05 01:09:28 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:28.919293 | orchestrator | 2026-03-05 01:09:28 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:28.920548 | orchestrator | 2026-03-05 01:09:28 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:28.920686 | orchestrator | 2026-03-05 01:09:28 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:31.986267 | orchestrator | 2026-03-05 01:09:31 | INFO  | Task c8ce780e-3e4d-46b5-a620-c359269fa675 is in state STARTED 2026-03-05 01:09:32.006786 | orchestrator | 2026-03-05 01:09:32 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:32.009485 | orchestrator | 2026-03-05 01:09:32 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:32.011224 | orchestrator | 2026-03-05 01:09:32 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:32.011667 | orchestrator | 2026-03-05 01:09:32 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:35.050000 | orchestrator | 2026-03-05 01:09:35 | INFO  | Task c8ce780e-3e4d-46b5-a620-c359269fa675 is in state STARTED 2026-03-05 01:09:35.050683 | orchestrator | 2026-03-05 01:09:35 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:35.051040 | orchestrator | 2026-03-05 01:09:35 | INFO  | Task 
69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:35.052072 | orchestrator | 2026-03-05 01:09:35 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:35.052117 | orchestrator | 2026-03-05 01:09:35 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:38.085571 | orchestrator | 2026-03-05 01:09:38 | INFO  | Task c8ce780e-3e4d-46b5-a620-c359269fa675 is in state STARTED 2026-03-05 01:09:38.086887 | orchestrator | 2026-03-05 01:09:38 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:38.088527 | orchestrator | 2026-03-05 01:09:38 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:38.089179 | orchestrator | 2026-03-05 01:09:38 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:38.089234 | orchestrator | 2026-03-05 01:09:38 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:41.128409 | orchestrator | 2026-03-05 01:09:41 | INFO  | Task c8ce780e-3e4d-46b5-a620-c359269fa675 is in state STARTED 2026-03-05 01:09:41.129703 | orchestrator | 2026-03-05 01:09:41 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:41.130433 | orchestrator | 2026-03-05 01:09:41 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:41.131168 | orchestrator | 2026-03-05 01:09:41 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:41.131209 | orchestrator | 2026-03-05 01:09:41 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:44.193219 | orchestrator | 2026-03-05 01:09:44 | INFO  | Task c8ce780e-3e4d-46b5-a620-c359269fa675 is in state SUCCESS 2026-03-05 01:09:44.194712 | orchestrator | 2026-03-05 01:09:44 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:44.197682 | orchestrator | 2026-03-05 01:09:44 | INFO  | Task 
69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:44.200899 | orchestrator | 2026-03-05 01:09:44 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:44.200997 | orchestrator | 2026-03-05 01:09:44 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:47.250732 | orchestrator | 2026-03-05 01:09:47 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:09:47.250828 | orchestrator | 2026-03-05 01:09:47 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:47.252476 | orchestrator | 2026-03-05 01:09:47 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state STARTED 2026-03-05 01:09:47.255552 | orchestrator | 2026-03-05 01:09:47 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:09:47.255624 | orchestrator | 2026-03-05 01:09:47 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:09:50.319382 | orchestrator | 2026-03-05 01:09:50 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:09:50.319455 | orchestrator | 2026-03-05 01:09:50 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:09:50.323080 | orchestrator | 2026-03-05 01:09:50 | INFO  | Task 69e81f6d-bab5-4701-ad47-b194819c269a is in state SUCCESS 2026-03-05 01:09:50.324841 | orchestrator | 2026-03-05 01:09:50.324910 | orchestrator | 2026-03-05 01:09:50.324922 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:09:50.324931 | orchestrator | 2026-03-05 01:09:50.324938 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:09:50.324946 | orchestrator | Thursday 05 March 2026 01:09:08 +0000 (0:00:00.313) 0:00:00.313 ******** 2026-03-05 01:09:50.324953 | orchestrator | ok: [testbed-manager] 2026-03-05 01:09:50.324992 | orchestrator | ok: [testbed-node-0] 
2026-03-05 01:09:50.325000 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:09:50.325006 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:09:50.325013 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:09:50.325020 | orchestrator | ok: [testbed-node-4] 2026-03-05 01:09:50.325100 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:09:50.325112 | orchestrator | 2026-03-05 01:09:50.325118 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:09:50.325345 | orchestrator | Thursday 05 March 2026 01:09:10 +0000 (0:00:01.634) 0:00:01.948 ******** 2026-03-05 01:09:50.325366 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-05 01:09:50.325374 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-05 01:09:50.325381 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-05 01:09:50.325388 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-05 01:09:50.325395 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-05 01:09:50.325403 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-05 01:09:50.325409 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-05 01:09:50.325416 | orchestrator | 2026-03-05 01:09:50.325423 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-05 01:09:50.325430 | orchestrator | 2026-03-05 01:09:50.325437 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-05 01:09:50.325444 | orchestrator | Thursday 05 March 2026 01:09:11 +0000 (0:00:00.754) 0:00:02.702 ******** 2026-03-05 01:09:50.325466 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:09:50.325475 | orchestrator | 
2026-03-05 01:09:50.325482 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-05 01:09:50.325489 | orchestrator | Thursday 05 March 2026 01:09:12 +0000 (0:00:01.674) 0:00:04.376 ******** 2026-03-05 01:09:50.325496 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-05 01:09:50.325502 | orchestrator | 2026-03-05 01:09:50.325509 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-05 01:09:50.325516 | orchestrator | Thursday 05 March 2026 01:09:16 +0000 (0:00:03.637) 0:00:08.014 ******** 2026-03-05 01:09:50.325524 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-05 01:09:50.325534 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-05 01:09:50.325541 | orchestrator | 2026-03-05 01:09:50.325547 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-05 01:09:50.325553 | orchestrator | Thursday 05 March 2026 01:09:23 +0000 (0:00:06.923) 0:00:14.938 ******** 2026-03-05 01:09:50.325560 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-05 01:09:50.325566 | orchestrator | 2026-03-05 01:09:50.325572 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-05 01:09:50.325579 | orchestrator | Thursday 05 March 2026 01:09:26 +0000 (0:00:03.373) 0:00:18.312 ******** 2026-03-05 01:09:50.325586 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-05 01:09:50.325593 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-05 01:09:50.325599 | orchestrator | 2026-03-05 01:09:50.325605 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 
2026-03-05 01:09:50.325612 | orchestrator | Thursday 05 March 2026 01:09:30 +0000 (0:00:03.875) 0:00:22.187 ********
2026-03-05 01:09:50.325619 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-05 01:09:50.325625 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-05 01:09:50.325632 | orchestrator |
2026-03-05 01:09:50.325638 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-05 01:09:50.325645 | orchestrator | Thursday 05 March 2026 01:09:38 +0000 (0:00:07.857) 0:00:30.045 ********
2026-03-05 01:09:50.325653 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-05 01:09:50.325661 | orchestrator |
2026-03-05 01:09:50.325667 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:09:50.325674 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:09:50.325691 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:09:50.325698 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:09:50.325705 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:09:50.325711 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:09:50.325734 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:09:50.325741 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:09:50.325749 | orchestrator |
2026-03-05 01:09:50.325755 | orchestrator |
2026-03-05 01:09:50.325762 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:09:50.325768 | orchestrator | Thursday 05 March 2026 01:09:43 +0000 (0:00:05.080) 0:00:35.126 ********
2026-03-05 01:09:50.325776 | orchestrator | ===============================================================================
2026-03-05 01:09:50.325783 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.86s
2026-03-05 01:09:50.325789 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.92s
2026-03-05 01:09:50.325795 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.08s
2026-03-05 01:09:50.325801 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.88s
2026-03-05 01:09:50.325807 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.64s
2026-03-05 01:09:50.325813 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.37s
2026-03-05 01:09:50.325819 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.67s
2026-03-05 01:09:50.325826 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.63s
2026-03-05 01:09:50.325833 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s
2026-03-05 01:09:50.325839 | orchestrator |
2026-03-05 01:09:50.325846 | orchestrator |
2026-03-05 01:09:50.325853 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:09:50.325860 | orchestrator |
2026-03-05 01:09:50.325867 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:09:50.325874 | orchestrator | Thursday 05 March 2026 01:07:46 +0000 (0:00:00.273) 0:00:00.273 ********
2026-03-05 01:09:50.325881 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:09:50.325895 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:09:50.325903 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:09:50.325910 | orchestrator |
2026-03-05 01:09:50.325917 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:09:50.325924 | orchestrator | Thursday 05 March 2026 01:07:46 +0000 (0:00:00.337) 0:00:00.611 ********
2026-03-05 01:09:50.325931 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-05 01:09:50.325938 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-05 01:09:50.325946 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-05 01:09:50.325953 | orchestrator |
2026-03-05 01:09:50.326072 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-05 01:09:50.326085 | orchestrator |
2026-03-05 01:09:50.326094 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-05 01:09:50.326102 | orchestrator | Thursday 05 March 2026 01:07:46 +0000 (0:00:00.546) 0:00:01.157 ********
2026-03-05 01:09:50.326110 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:09:50.326128 | orchestrator |
2026-03-05 01:09:50.326136 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-05 01:09:50.326143 | orchestrator | Thursday 05 March 2026 01:07:47 +0000 (0:00:00.565) 0:00:01.722 ********
2026-03-05 01:09:50.326273 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-05 01:09:50.326282 | orchestrator |
2026-03-05 01:09:50.326289 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-05 01:09:50.326296 | orchestrator | Thursday 05 March 2026 01:07:51 +0000 (0:00:03.784) 0:00:05.507 ********
2026-03-05 01:09:50.326303 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-05 01:09:50.326310 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-05 01:09:50.326317 | orchestrator |
2026-03-05 01:09:50.326324 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-05 01:09:50.326330 | orchestrator | Thursday 05 March 2026 01:07:58 +0000 (0:00:07.601) 0:00:13.108 ********
2026-03-05 01:09:50.326337 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-05 01:09:50.326343 | orchestrator |
2026-03-05 01:09:50.326349 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-05 01:09:50.326356 | orchestrator | Thursday 05 March 2026 01:08:02 +0000 (0:00:03.536) 0:00:16.645 ********
2026-03-05 01:09:50.326362 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-05 01:09:50.326369 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-05 01:09:50.326375 | orchestrator |
2026-03-05 01:09:50.326381 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-05 01:09:50.326387 | orchestrator | Thursday 05 March 2026 01:08:06 +0000 (0:00:04.326) 0:00:20.972 ********
2026-03-05 01:09:50.326393 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-05 01:09:50.326400 | orchestrator |
2026-03-05 01:09:50.326406 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-05 01:09:50.326413 | orchestrator | Thursday 05 March 2026 01:08:10 +0000 (0:00:03.768) 0:00:24.740 ********
2026-03-05 01:09:50.326420 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-05 01:09:50.326426 | orchestrator |
2026-03-05 01:09:50.326433 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-03-05 01:09:50.326440 | orchestrator | Thursday 05 March 2026 01:08:15 +0000 (0:00:04.864) 0:00:29.604 ********
2026-03-05 01:09:50.326446 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:09:50.326453 | orchestrator |
2026-03-05 01:09:50.326460 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-05 01:09:50.326483 | orchestrator | Thursday 05 March 2026 01:08:19 +0000 (0:00:03.782) 0:00:33.387 ********
2026-03-05 01:09:50.326491 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:09:50.326497 | orchestrator |
2026-03-05 01:09:50.326504 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-05 01:09:50.326527 | orchestrator | Thursday 05 March 2026 01:08:23 +0000 (0:00:04.411) 0:00:37.798 ********
2026-03-05 01:09:50.326533 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:09:50.326540 | orchestrator |
2026-03-05 01:09:50.326546 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-05 01:09:50.326554 | orchestrator | Thursday 05 March 2026 01:08:27 +0000 (0:00:04.383) 0:00:42.182 ********
2026-03-05 01:09:50.326565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.326597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.326605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.326613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.326626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.326634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.326646 | orchestrator |
2026-03-05 01:09:50.326653 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-05 01:09:50.326662 | orchestrator | Thursday 05 March 2026 01:08:29 +0000 (0:00:02.013) 0:00:44.195 ********
2026-03-05 01:09:50.326668 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:09:50.326674 | orchestrator |
2026-03-05 01:09:50.326680 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-05 01:09:50.326687 | orchestrator | Thursday 05 March 2026 01:08:30 +0000 (0:00:00.211) 0:00:44.407 ********
2026-03-05 01:09:50.326694 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:09:50.326701 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:09:50.326707 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:09:50.326714 | orchestrator |
2026-03-05 01:09:50.326720 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-03-05 01:09:50.326727 | orchestrator | Thursday 05 March 2026 01:08:30 +0000 (0:00:00.724) 0:00:45.132 ********
2026-03-05 01:09:50.326734 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-05 01:09:50.326741 | orchestrator |
2026-03-05 01:09:50.326748 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-03-05 01:09:50.326755 | orchestrator | Thursday 05 March 2026 01:08:32 +0000 (0:00:01.208) 0:00:46.340 ********
2026-03-05 01:09:50.326762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.326769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.326783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.326805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.326814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.326821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.326827 | orchestrator |
2026-03-05 01:09:50.326833 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-03-05 01:09:50.326839 | orchestrator | Thursday 05 March 2026 01:08:35 +0000 (0:00:03.162) 0:00:49.502 ********
2026-03-05 01:09:50.326846 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:09:50.326852 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:09:50.326858 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:09:50.326865 | orchestrator |
2026-03-05 01:09:50.326871 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-05 01:09:50.326877 | orchestrator | Thursday 05 March 2026 01:08:36 +0000 (0:00:00.993) 0:00:50.496 ********
2026-03-05 01:09:50.326885 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:09:50.326891 | orchestrator |
2026-03-05 01:09:50.326897 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-03-05 01:09:50.326903 | orchestrator | Thursday 05 March 2026 01:08:38 +0000 (0:00:02.406) 0:00:52.902 ********
2026-03-05 01:09:50.326917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.326933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.326940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.326946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.326952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.327001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.327009 | orchestrator |
2026-03-05 01:09:50.327016 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-03-05 01:09:50.327022 | orchestrator | Thursday 05 March 2026 01:08:42 +0000 (0:00:04.309) 0:00:57.212 ********
2026-03-05 01:09:50.327034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.327057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.327064 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:09:50.327072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.327083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.327096 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:09:50.327103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.327115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.327122 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:09:50.327128 | orchestrator |
2026-03-05 01:09:50.327134 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-03-05 01:09:50.327141 | orchestrator | Thursday 05 March 2026 01:08:44 +0000 (0:00:01.737) 0:00:58.949 ********
2026-03-05 01:09:50.327148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.327154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.327165 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:09:50.327178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.327185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.327191 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:09:50.327201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.327208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:09:50.327214 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:09:50.327225 | orchestrator |
2026-03-05 01:09:50.327231 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-03-05 01:09:50.327238 | orchestrator | Thursday 05 March 2026 01:08:47 +0000 (0:00:03.265) 0:01:02.214 ********
2026-03-05 01:09:50.327244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.327255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-05 01:09:50.327266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy':
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:09:50.327273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:09:50.327280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:09:50.327292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:09:50.327299 | orchestrator | 2026-03-05 01:09:50.327309 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-05 01:09:50.327316 | orchestrator | Thursday 05 March 2026 01:08:51 +0000 (0:00:03.547) 0:01:05.761 ******** 2026-03-05 01:09:50.327323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:09:50.327334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:09:50.327341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:09:50.327352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:09:50.327366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:09:50.327374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:09:50.327380 | orchestrator | 2026-03-05 01:09:50.327387 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-05 01:09:50.327394 | orchestrator | Thursday 05 March 2026 01:08:57 +0000 (0:00:05.501) 0:01:11.263 ******** 2026-03-05 01:09:50.327404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 01:09:50.327410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:09:50.327427 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:50.327434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 01:09:50.327448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 
01:09:50.327456 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:50.327463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-05 01:09:50.327474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:09:50.327487 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:50.327494 | orchestrator | 2026-03-05 01:09:50.327501 | orchestrator | TASK [magnum : Check magnum containers] 
**************************************** 2026-03-05 01:09:50.327507 | orchestrator | Thursday 05 March 2026 01:08:57 +0000 (0:00:00.749) 0:01:12.012 ******** 2026-03-05 01:09:50.327514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:09:50.327525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:09:50.327533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-05 01:09:50.327543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:09:50.327551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:09:50.327564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:09:50.327572 | orchestrator | 2026-03-05 01:09:50.327578 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-05 01:09:50.327584 | orchestrator | Thursday 05 March 2026 01:09:00 +0000 (0:00:02.332) 0:01:14.345 ******** 2026-03-05 01:09:50.327591 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:09:50.327597 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:09:50.327604 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:09:50.327611 | orchestrator | 2026-03-05 01:09:50.327618 | orchestrator | TASK [magnum : Creating Magnum database] 
***************************************
2026-03-05 01:09:50.327625 | orchestrator | Thursday 05 March 2026 01:09:00 +0000 (0:00:00.358) 0:01:14.703 ********
2026-03-05 01:09:50.327631 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:09:50.327638 | orchestrator |
2026-03-05 01:09:50.327644 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-03-05 01:09:50.327650 | orchestrator | Thursday 05 March 2026 01:09:02 +0000 (0:00:01.883) 0:01:16.587 ********
2026-03-05 01:09:50.327656 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:09:50.327663 | orchestrator |
2026-03-05 01:09:50.327669 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-03-05 01:09:50.327679 | orchestrator | Thursday 05 March 2026 01:09:04 +0000 (0:00:02.103) 0:01:18.691 ********
2026-03-05 01:09:50.327686 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:09:50.327691 | orchestrator |
2026-03-05 01:09:50.327697 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-05 01:09:50.327703 | orchestrator | Thursday 05 March 2026 01:09:20 +0000 (0:00:15.799) 0:01:34.490 ********
2026-03-05 01:09:50.327709 | orchestrator |
2026-03-05 01:09:50.327714 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-05 01:09:50.327719 | orchestrator | Thursday 05 March 2026 01:09:20 +0000 (0:00:00.073) 0:01:34.564 ********
2026-03-05 01:09:50.327724 | orchestrator |
2026-03-05 01:09:50.327731 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-05 01:09:50.327737 | orchestrator | Thursday 05 March 2026 01:09:20 +0000 (0:00:00.068) 0:01:34.632 ********
2026-03-05 01:09:50.327742 | orchestrator |
2026-03-05 01:09:50.327748 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-03-05 01:09:50.327757 | orchestrator | Thursday 05 March 2026 01:09:20 +0000 (0:00:00.080) 0:01:34.713 ********
2026-03-05 01:09:50.327765 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:09:50.327774 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:09:50.327781 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:09:50.327788 | orchestrator |
2026-03-05 01:09:50.327801 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-03-05 01:09:50.327808 | orchestrator | Thursday 05 March 2026 01:09:34 +0000 (0:00:13.693) 0:01:48.406 ********
2026-03-05 01:09:50.327815 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:09:50.327824 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:09:50.327832 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:09:50.327840 | orchestrator |
2026-03-05 01:09:50.327847 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:09:50.327857 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-05 01:09:50.327870 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-05 01:09:50.327879 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-05 01:09:50.327888 | orchestrator |
2026-03-05 01:09:50.327895 | orchestrator |
2026-03-05 01:09:50.327903 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:09:50.327909 | orchestrator | Thursday 05 March 2026 01:09:46 +0000 (0:00:12.430) 0:02:00.836 ********
2026-03-05 01:09:50.327915 | orchestrator | ===============================================================================
2026-03-05 01:09:50.327922 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.80s
2026-03-05 01:09:50.327928 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.69s
2026-03-05 01:09:50.327934 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 12.43s
2026-03-05 01:09:50.327941 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.60s
2026-03-05 01:09:50.327949 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.50s
2026-03-05 01:09:50.327956 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.86s
2026-03-05 01:09:50.328015 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.41s
2026-03-05 01:09:50.328022 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.38s
2026-03-05 01:09:50.328029 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.33s
2026-03-05 01:09:50.328035 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 4.31s
2026-03-05 01:09:50.328042 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.78s
2026-03-05 01:09:50.328047 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.78s
2026-03-05 01:09:50.328053 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.77s
2026-03-05 01:09:50.328059 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.55s
2026-03-05 01:09:50.328065 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.54s
2026-03-05 01:09:50.328072 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 3.27s
2026-03-05 01:09:50.328078 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.16s
2026-03-05 01:09:50.328287 | orchestrator | magnum : include_tasks
-------------------------------------------------- 2.40s
2026-03-05 01:09:50.328297 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.33s
2026-03-05 01:09:50.328304 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.10s
2026-03-05 01:09:50.328311 | orchestrator | 2026-03-05 01:09:50 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED
2026-03-05 01:09:50.328328 | orchestrator | 2026-03-05 01:09:50 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:09:50.328335 | orchestrator | 2026-03-05 01:09:50 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:09:53.366224 | orchestrator | 2026-03-05 01:09:53 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED
2026-03-05 01:09:53.368303 | orchestrator | 2026-03-05 01:09:53 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:09:53.370629 | orchestrator | 2026-03-05 01:09:53 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED
2026-03-05 01:09:53.372744 | orchestrator | 2026-03-05 01:09:53 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:09:53.373150 | orchestrator | 2026-03-05 01:09:53 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:09:56.426898 | orchestrator | 2026-03-05 01:09:56 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED
2026-03-05 01:09:56.428577 | orchestrator | 2026-03-05 01:09:56 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:09:56.430364 | orchestrator | 2026-03-05 01:09:56 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED
2026-03-05 01:09:56.431780 | orchestrator | 2026-03-05 01:09:56 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:09:56.431829 | orchestrator | 2026-03-05 01:09:56 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:09:59.482899 | orchestrator | 2026-03-05 01:09:59 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED
2026-03-05 01:09:59.484836 | orchestrator | 2026-03-05 01:09:59 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:09:59.487439 | orchestrator | 2026-03-05 01:09:59 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED
2026-03-05 01:09:59.490057 | orchestrator | 2026-03-05 01:09:59 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:09:59.490180 | orchestrator | 2026-03-05 01:09:59 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:10:02.539225 | orchestrator | 2026-03-05 01:10:02 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED
2026-03-05 01:10:02.540747 | orchestrator | 2026-03-05 01:10:02 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:10:02.542252 | orchestrator | 2026-03-05 01:10:02 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED
2026-03-05 01:10:02.542828 | orchestrator | 2026-03-05 01:10:02 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:10:02.542868 | orchestrator | 2026-03-05 01:10:02 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:10:05.583180 | orchestrator | 2026-03-05 01:10:05 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED
2026-03-05 01:10:05.584787 | orchestrator | 2026-03-05 01:10:05 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:10:05.587153 | orchestrator | 2026-03-05 01:10:05 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED
2026-03-05 01:10:05.590274 | orchestrator | 2026-03-05 01:10:05 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:10:05.590350 | orchestrator | 2026-03-05 01:10:05 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:10:08.634793 | orchestrator | 2026-03-05 01:10:08 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED
2026-03-05 01:10:08.637293 | orchestrator | 2026-03-05 01:10:08 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:10:08.639869 | orchestrator | 2026-03-05 01:10:08 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED
2026-03-05 01:10:08.642122 | orchestrator | 2026-03-05 01:10:08 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:10:08.642165 | orchestrator | 2026-03-05 01:10:08 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:10:11.686238 | orchestrator | 2026-03-05 01:10:11 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED
2026-03-05 01:10:11.687624 | orchestrator | 2026-03-05 01:10:11 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:10:11.688886 | orchestrator | 2026-03-05 01:10:11 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED
2026-03-05 01:10:11.690489 | orchestrator | 2026-03-05 01:10:11 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:10:11.690523 | orchestrator | 2026-03-05 01:10:11 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:10:14.741482 | orchestrator | 2026-03-05 01:10:14 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED
2026-03-05 01:10:14.741804 | orchestrator | 2026-03-05 01:10:14 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED
2026-03-05 01:10:14.742720 | orchestrator | 2026-03-05 01:10:14 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED
2026-03-05 01:10:14.743596 | orchestrator | 2026-03-05 01:10:14 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED
2026-03-05 01:10:14.743632 | orchestrator | 2026-03-05 01:10:14 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:10:17.797226 | orchestrator
| 2026-03-05 01:10:17 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:17.798228 | orchestrator | 2026-03-05 01:10:17 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:17.798836 | orchestrator | 2026-03-05 01:10:17 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:17.800341 | orchestrator | 2026-03-05 01:10:17 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:10:17.800379 | orchestrator | 2026-03-05 01:10:17 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:20.863857 | orchestrator | 2026-03-05 01:10:20 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:20.864105 | orchestrator | 2026-03-05 01:10:20 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:20.864930 | orchestrator | 2026-03-05 01:10:20 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:20.865014 | orchestrator | 2026-03-05 01:10:20 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:10:20.865026 | orchestrator | 2026-03-05 01:10:20 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:23.904175 | orchestrator | 2026-03-05 01:10:23 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:23.905627 | orchestrator | 2026-03-05 01:10:23 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:23.906082 | orchestrator | 2026-03-05 01:10:23 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:23.907769 | orchestrator | 2026-03-05 01:10:23 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:10:23.908638 | orchestrator | 2026-03-05 01:10:23 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:26.948189 | orchestrator | 2026-03-05 01:10:26 | INFO  | 
Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:26.948537 | orchestrator | 2026-03-05 01:10:26 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:26.949609 | orchestrator | 2026-03-05 01:10:26 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:26.950390 | orchestrator | 2026-03-05 01:10:26 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:10:26.950454 | orchestrator | 2026-03-05 01:10:26 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:29.994330 | orchestrator | 2026-03-05 01:10:29 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:29.994422 | orchestrator | 2026-03-05 01:10:29 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:29.995488 | orchestrator | 2026-03-05 01:10:29 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:29.995758 | orchestrator | 2026-03-05 01:10:29 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:10:29.995826 | orchestrator | 2026-03-05 01:10:29 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:33.097111 | orchestrator | 2026-03-05 01:10:33 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:33.097170 | orchestrator | 2026-03-05 01:10:33 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:33.098450 | orchestrator | 2026-03-05 01:10:33 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:33.101020 | orchestrator | 2026-03-05 01:10:33 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:10:33.101052 | orchestrator | 2026-03-05 01:10:33 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:36.154226 | orchestrator | 2026-03-05 01:10:36 | INFO  | Task 
de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:36.154749 | orchestrator | 2026-03-05 01:10:36 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:36.155761 | orchestrator | 2026-03-05 01:10:36 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:36.156664 | orchestrator | 2026-03-05 01:10:36 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:10:36.158199 | orchestrator | 2026-03-05 01:10:36 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:39.191611 | orchestrator | 2026-03-05 01:10:39 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:39.193880 | orchestrator | 2026-03-05 01:10:39 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:39.193975 | orchestrator | 2026-03-05 01:10:39 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:39.194181 | orchestrator | 2026-03-05 01:10:39 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:10:39.194236 | orchestrator | 2026-03-05 01:10:39 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:42.237599 | orchestrator | 2026-03-05 01:10:42 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:42.238257 | orchestrator | 2026-03-05 01:10:42 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:42.239351 | orchestrator | 2026-03-05 01:10:42 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:42.240382 | orchestrator | 2026-03-05 01:10:42 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:10:42.240419 | orchestrator | 2026-03-05 01:10:42 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:45.268198 | orchestrator | 2026-03-05 01:10:45 | INFO  | Task 
de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:45.269106 | orchestrator | 2026-03-05 01:10:45 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:45.271438 | orchestrator | 2026-03-05 01:10:45 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:45.272603 | orchestrator | 2026-03-05 01:10:45 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:10:45.272647 | orchestrator | 2026-03-05 01:10:45 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:48.311453 | orchestrator | 2026-03-05 01:10:48 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:48.312063 | orchestrator | 2026-03-05 01:10:48 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:48.313012 | orchestrator | 2026-03-05 01:10:48 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:48.314584 | orchestrator | 2026-03-05 01:10:48 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:10:48.314681 | orchestrator | 2026-03-05 01:10:48 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:51.398707 | orchestrator | 2026-03-05 01:10:51 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:51.399742 | orchestrator | 2026-03-05 01:10:51 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:51.401591 | orchestrator | 2026-03-05 01:10:51 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:51.402278 | orchestrator | 2026-03-05 01:10:51 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state STARTED 2026-03-05 01:10:51.402308 | orchestrator | 2026-03-05 01:10:51 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:54.437464 | orchestrator | 2026-03-05 01:10:54 | INFO  | Task 
de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:54.437972 | orchestrator | 2026-03-05 01:10:54 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:54.438987 | orchestrator | 2026-03-05 01:10:54 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:54.439425 | orchestrator | 2026-03-05 01:10:54 | INFO  | Task 4a9e1bd8-b76f-431f-8fa3-8ea461430f6f is in state SUCCESS 2026-03-05 01:10:54.440161 | orchestrator | 2026-03-05 01:10:54 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:10:54.440189 | orchestrator | 2026-03-05 01:10:54 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:10:57.481496 | orchestrator | 2026-03-05 01:10:57 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:10:57.481547 | orchestrator | 2026-03-05 01:10:57 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:10:57.482000 | orchestrator | 2026-03-05 01:10:57 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:10:57.483269 | orchestrator | 2026-03-05 01:10:57 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:10:57.483302 | orchestrator | 2026-03-05 01:10:57 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:11:00.517222 | orchestrator | 2026-03-05 01:11:00 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:11:00.517294 | orchestrator | 2026-03-05 01:11:00 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:11:00.517789 | orchestrator | 2026-03-05 01:11:00 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:11:00.522123 | orchestrator | 2026-03-05 01:11:00 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:11:00.522206 | orchestrator | 2026-03-05 01:11:00 | INFO  | Wait 1 
second(s) until the next check 2026-03-05 01:11:03.551792 | orchestrator | 2026-03-05 01:11:03 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:11:03.553991 | orchestrator | 2026-03-05 01:11:03 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:11:03.555031 | orchestrator | 2026-03-05 01:11:03 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:11:03.556059 | orchestrator | 2026-03-05 01:11:03 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:11:03.556118 | orchestrator | 2026-03-05 01:11:03 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:11:06.651144 | orchestrator | 2026-03-05 01:11:06 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:11:06.651228 | orchestrator | 2026-03-05 01:11:06 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:11:06.651234 | orchestrator | 2026-03-05 01:11:06 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:11:06.651238 | orchestrator | 2026-03-05 01:11:06 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:11:06.651243 | orchestrator | 2026-03-05 01:11:06 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:11:09.691997 | orchestrator | 2026-03-05 01:11:09 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:11:09.692086 | orchestrator | 2026-03-05 01:11:09 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:11:09.692094 | orchestrator | 2026-03-05 01:11:09 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:11:09.692098 | orchestrator | 2026-03-05 01:11:09 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:11:09.692103 | orchestrator | 2026-03-05 01:11:09 | INFO  | Wait 1 second(s) until the next check 
2026-03-05 01:11:12.998787 | orchestrator | 2026-03-05 01:11:13 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:11:13.035302 | orchestrator | 2026-03-05 01:11:13 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:11:13.035389 | orchestrator | 2026-03-05 01:11:13 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:11:13.035399 | orchestrator | 2026-03-05 01:11:13 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:11:13.035407 | orchestrator | 2026-03-05 01:11:13 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:11:16.063076 | orchestrator | 2026-03-05 01:11:16 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:11:16.065015 | orchestrator | 2026-03-05 01:11:16 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:11:16.067454 | orchestrator | 2026-03-05 01:11:16 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:11:16.069011 | orchestrator | 2026-03-05 01:11:16 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:11:16.069069 | orchestrator | 2026-03-05 01:11:16 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:11:19.120217 | orchestrator | 2026-03-05 01:11:19 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:11:19.121932 | orchestrator | 2026-03-05 01:11:19 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:11:19.122322 | orchestrator | 2026-03-05 01:11:19 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:11:19.124351 | orchestrator | 2026-03-05 01:11:19 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:11:19.124435 | orchestrator | 2026-03-05 01:11:19 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:11:22.171571 | 
orchestrator | 2026-03-05 01:11:22 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:11:22.174134 | orchestrator | 2026-03-05 01:11:22 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:11:22.176539 | orchestrator | 2026-03-05 01:11:22 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:11:22.177455 | orchestrator | 2026-03-05 01:11:22 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:11:22.177490 | orchestrator | 2026-03-05 01:11:22 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:11:25.299806 | orchestrator | 2026-03-05 01:11:25 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:11:25.300433 | orchestrator | 2026-03-05 01:11:25 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:11:25.302483 | orchestrator | 2026-03-05 01:11:25 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:11:25.303161 | orchestrator | 2026-03-05 01:11:25 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:11:25.303206 | orchestrator | 2026-03-05 01:11:25 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:11:28.334170 | orchestrator | 2026-03-05 01:11:28 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:11:28.334324 | orchestrator | 2026-03-05 01:11:28 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:11:28.335312 | orchestrator | 2026-03-05 01:11:28 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:11:28.336130 | orchestrator | 2026-03-05 01:11:28 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:11:28.336296 | orchestrator | 2026-03-05 01:11:28 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:11:31.368399 | orchestrator | 2026-03-05 
01:11:31 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:11:31.370347 | orchestrator | 2026-03-05 01:11:31 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state STARTED 2026-03-05 01:11:31.371157 | orchestrator | 2026-03-05 01:11:31 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED 2026-03-05 01:11:31.371995 | orchestrator | 2026-03-05 01:11:31 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:11:31.372043 | orchestrator | 2026-03-05 01:11:31 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:11:34.400276 | orchestrator | 2026-03-05 01:11:34 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED 2026-03-05 01:11:34.402594 | orchestrator | 2026-03-05 01:11:34 | INFO  | Task 8eac45df-5912-4345-b24e-d98a8aca7dbc is in state SUCCESS 2026-03-05 01:11:34.404262 | orchestrator | 2026-03-05 01:11:34.404295 | orchestrator | 2026-03-05 01:11:34.404301 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-05 01:11:34.404306 | orchestrator | 2026-03-05 01:11:34.404310 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-03-05 01:11:34.404327 | orchestrator | Thursday 05 March 2026 01:04:22 +0000 (0:00:00.144) 0:00:00.144 ******** 2026-03-05 01:11:34.404331 | orchestrator | changed: [localhost] 2026-03-05 01:11:34.404336 | orchestrator | 2026-03-05 01:11:34.404340 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-05 01:11:34.404343 | orchestrator | Thursday 05 March 2026 01:04:23 +0000 (0:00:00.997) 0:00:01.141 ******** 2026-03-05 01:11:34.404348 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 
2026-03-05 01:11:34.404353 | orchestrator | 2026-03-05 01:11:34.404360 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
[heartbeat repeated 7 more times while the download ran]
2026-03-05 01:11:34.404495 | orchestrator | changed: [localhost] 2026-03-05 01:11:34.404501 | orchestrator | 2026-03-05 01:11:34.404507 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-05 01:11:34.404513 | orchestrator | Thursday 05 March 2026 01:10:36 +0000 (0:06:12.991) 0:06:14.132 ******** 2026-03-05 01:11:34.404518 | orchestrator | changed: [localhost] 2026-03-05 01:11:34.404523 | orchestrator | 2026-03-05 01:11:34.404529 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:11:34.404535 | orchestrator | 2026-03-05
01:11:34.404541 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:11:34.404546 | orchestrator | Thursday 05 March 2026 01:10:51 +0000 (0:00:14.778) 0:06:28.911 ******** 2026-03-05 01:11:34.404552 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:11:34.404558 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:11:34.404563 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:11:34.404569 | orchestrator | 2026-03-05 01:11:34.404596 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:11:34.404603 | orchestrator | Thursday 05 March 2026 01:10:51 +0000 (0:00:00.315) 0:06:29.226 ******** 2026-03-05 01:11:34.404609 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-05 01:11:34.404638 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-05 01:11:34.404646 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-03-05 01:11:34.404652 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-03-05 01:11:34.404658 | orchestrator | 2026-03-05 01:11:34.404665 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-05 01:11:34.404672 | orchestrator | skipping: no hosts matched 2026-03-05 01:11:34.404689 | orchestrator | 2026-03-05 01:11:34.404696 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:11:34.404703 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:11:34.404711 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:11:34.404750 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:11:34.404766 | orchestrator | testbed-node-2 : ok=2  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-05 01:11:34.404774 | orchestrator | 2026-03-05 01:11:34.404789 | orchestrator | 2026-03-05 01:11:34.404796 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:11:34.404807 | orchestrator | Thursday 05 March 2026 01:10:52 +0000 (0:00:01.214) 0:06:30.440 ******** 2026-03-05 01:11:34.404814 | orchestrator | =============================================================================== 2026-03-05 01:11:34.404820 | orchestrator | Download ironic-agent initramfs --------------------------------------- 372.99s 2026-03-05 01:11:34.404826 | orchestrator | Download ironic-agent kernel ------------------------------------------- 14.78s 2026-03-05 01:11:34.404833 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.21s 2026-03-05 01:11:34.404839 | orchestrator | Ensure the destination directory exists --------------------------------- 1.00s 2026-03-05 01:11:34.404845 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-03-05 01:11:34.404851 | orchestrator | 2026-03-05 01:11:34.404857 | orchestrator | 2026-03-05 01:11:34.404864 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:11:34.404871 | orchestrator | 2026-03-05 01:11:34.404877 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:11:34.404912 | orchestrator | Thursday 05 March 2026 01:08:02 +0000 (0:00:00.317) 0:00:00.317 ******** 2026-03-05 01:11:34.404921 | orchestrator | ok: [testbed-manager] 2026-03-05 01:11:34.404928 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:11:34.404953 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:11:34.404960 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:11:34.404966 | orchestrator | ok: [testbed-node-3] 2026-03-05 01:11:34.404972 | orchestrator | ok: 
[testbed-node-4] 2026-03-05 01:11:34.404979 | orchestrator | ok: [testbed-node-5] 2026-03-05 01:11:34.404986 | orchestrator | 2026-03-05 01:11:34.405004 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:11:34.405011 | orchestrator | Thursday 05 March 2026 01:08:02 +0000 (0:00:00.872) 0:00:01.190 ******** 2026-03-05 01:11:34.405017 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-05 01:11:34.405024 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-05 01:11:34.405030 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-05 01:11:34.405048 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-05 01:11:34.405056 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-05 01:11:34.405063 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-05 01:11:34.405070 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-05 01:11:34.405077 | orchestrator | 2026-03-05 01:11:34.405084 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-05 01:11:34.405090 | orchestrator | 2026-03-05 01:11:34.405097 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-05 01:11:34.405103 | orchestrator | Thursday 05 March 2026 01:08:03 +0000 (0:00:00.858) 0:00:02.048 ******** 2026-03-05 01:11:34.405122 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:11:34.405130 | orchestrator | 2026-03-05 01:11:34.405143 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-05 01:11:34.405151 | orchestrator | Thursday 05 March 2026 01:08:05 +0000 (0:00:01.829) 0:00:03.877 
******** 2026-03-05 01:11:34.405161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.405191 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 01:11:34.405206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.405214 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.405221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.405235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.405251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.405263 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.405270 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405282 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405344 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405362 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 
01:11:34.405387 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 01:11:34.405396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405444 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405470 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405508 | orchestrator | 2026-03-05 01:11:34.405524 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-05 01:11:34.405531 | orchestrator | Thursday 05 March 2026 01:08:08 +0000 (0:00:03.192) 0:00:07.070 ******** 2026-03-05 01:11:34.405538 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:11:34.405545 | orchestrator | 2026-03-05 01:11:34.405551 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-05 01:11:34.405558 | orchestrator | Thursday 05 March 
2026 01:08:10 +0000 (0:00:01.682) 0:00:08.752 ******** 2026-03-05 01:11:34.405564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.405571 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 01:11:34.405581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}}) 2026-03-05 01:11:34.405589 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.405599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.405606 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.405617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.405624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405647 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.405655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.405713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405721 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.405727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.406071 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.406092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.406100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.406107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.406119 | orchestrator | changed: [testbed-manager] 
=> (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 01:11:34.406127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.406140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.406153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.406160 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.406168 | orchestrator | 2026-03-05 01:11:34.406175 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-05 01:11:34.406182 | orchestrator | Thursday 05 March 2026 01:08:16 +0000 (0:00:06.331) 0:00:15.084 ******** 2026-03-05 01:11:34.406189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:11:34.406197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:11:34.406246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406259 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406276 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-05 01:11:34.406291 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:11:34.406302 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406310 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-05 01:11:34.406318 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406325 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:11:34.406332 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:11:34.406339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:11:34.406350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406357 | 
orchestrator | skipping: [testbed-manager] 2026-03-05 01:11:34.406364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406392 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:11:34.406399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:11:34.406407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406446 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:11:34.406453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:11:34.406465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406483 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:11:34.406495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2026-03-05 01:11:34.406503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406517 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:11:34.406524 | orchestrator | 2026-03-05 01:11:34.406531 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-05 01:11:34.406550 | orchestrator | Thursday 05 March 2026 01:08:19 +0000 (0:00:02.601) 0:00:17.686 ******** 2026-03-05 01:11:34.406558 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-05 01:11:34.406569 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:11:34.406582 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:11:34.406603 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-05 01:11:34.406634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406642 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406650 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:11:34.406658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:11:34.406695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406736 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:11:34.406742 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:11:34.406750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:11:34.406759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-05 01:11:34.406800 | orchestrator | skipping: [testbed-node-0] 
2026-03-05 01:11:34.406807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:11:34.406814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:11:34.406835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406859 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:11:34.406866 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:11:34.406872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-05 01:11:34.406879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-05 01:11:34.406910 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:11:34.406917 | orchestrator | 2026-03-05 01:11:34.406923 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-05 01:11:34.406929 | orchestrator | Thursday 05 March 2026 01:08:22 +0000 (0:00:03.200) 0:00:20.886 ******** 2026-03-05 01:11:34.406936 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 01:11:34.406945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.406951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.406961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.406967 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.406974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.406980 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-03-05 01:11:34.406993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.406999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.407008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.407015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.407024 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.407031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.407037 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.407047 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.407053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.407062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.407068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.407075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.407084 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.407091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.407100 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.407106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.407113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.407122 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 01:11:34.407132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.407138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.407145 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.407154 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.407160 | orchestrator | 2026-03-05 01:11:34.407166 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-05 01:11:34.407172 | orchestrator | Thursday 05 March 2026 01:08:30 +0000 (0:00:08.282) 0:00:29.168 ******** 2026-03-05 01:11:34.407178 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 01:11:34.407184 | orchestrator | 2026-03-05 01:11:34.407190 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-05 01:11:34.407196 | orchestrator | Thursday 05 March 2026 01:08:32 +0000 (0:00:01.665) 0:00:30.834 ******** 2026-03-05 01:11:34.407210 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1324949, 
'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.090638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407223 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1324949, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.090638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407230 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1324949, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.090638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407240 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1324949, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.090638, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407246 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1324949, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.090638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407256 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1324949, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.090638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:11:34.407262 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1324949, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.090638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-03-05 01:11:34.407269 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1324981, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.095863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407278 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1324981, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.095863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407285 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1324981, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.095863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407296 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1324981, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.095863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407307 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1324981, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.095863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407314 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1324938, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.089856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407321 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 
'inode': 1324938, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.089856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407328 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1324981, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.095863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407337 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1324938, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.089856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407344 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1324938, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.089856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407522 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1324961, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.092505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407539 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1324961, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.092505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407546 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1324938, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.089856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 
01:11:34.407552 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1324961, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.092505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407558 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1324961, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.092505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407568 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1324933, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0888119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407574 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1324981, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.095863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-05 01:11:34.407585 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1324938, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.089856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407596 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1324933, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0888119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-05 01:11:34.407603 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1324961, 'dev': 95, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.092505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.407610 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules)
2026-03-05 01:11:34.407616 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules)
2026-03-05 01:11:34.407628 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2026-03-05 01:11:34.407635 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2026-03-05 01:11:34.407651 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
2026-03-05 01:11:34.407658 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2026-03-05 01:11:34.407664 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2026-03-05 01:11:34.407671 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-03-05 01:11:34.407677 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-03-05 01:11:34.407686 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2026-03-05 01:11:34.407693 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
2026-03-05 01:11:34.407706 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-03-05 01:11:34.407713 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2026-03-05 01:11:34.407720 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-03-05 01:11:34.407726 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2026-03-05 01:11:34.407733 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-03-05 01:11:34.407742 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-03-05 01:11:34.407749 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2026-03-05 01:11:34.407760 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-03-05 01:11:34.407770 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-03-05 01:11:34.407777 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-03-05 01:11:34.407783 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-03-05 01:11:34.407790 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-05 01:11:34.407798 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-03-05 01:11:34.407812 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-05 01:11:34.407823 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-03-05 01:11:34.407833 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-05 01:11:34.407840 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-03-05 01:11:34.407847 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-05 01:11:34.407854 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-05 01:11:34.407864 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-05 01:11:34.407876 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-05 01:11:34.407938 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-05 01:11:34.407951 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-05 01:11:34.407959 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-05 01:11:34.408006 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-05 01:11:34.408014 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-05 01:11:34.408061 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-03-05 01:11:34.408075 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-05 01:11:34.408082 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-05 01:11:34.408094 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-03-05 01:11:34.408101 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-03-05 01:11:34.408108 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-05 01:11:34.408115 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-05 01:11:34.408122 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-05 01:11:34.408136 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-05 01:11:34.408142 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-05 01:11:34.408616 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-03-05 01:11:34.408652 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-05 01:11:34.408657 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-03-05 01:11:34.408662 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-05 01:11:34.408669 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-03-05 01:11:34.408694 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-03-05 01:11:34.408702 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-05 01:11:34.408716 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-05 01:11:34.408722 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-05 01:11:34.408729 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2026-03-05 01:11:34.408736 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-05 01:11:34.408747 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-05 01:11:34.408756 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-05 01:11:34.408762 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-05 01:11:34.408773 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2026-03-05 01:11:34.408781 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-05 01:11:34.408788 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-05 01:11:34.408795 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324956, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime':
1772670031.0918446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408806 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1324931, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0885952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408812 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1324931, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0885952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408816 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1324931, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0885952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408824 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1324931, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0885952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408828 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1324931, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0885952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408832 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324955, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.091332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408836 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324956, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0918446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408843 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324956, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0918446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408849 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324956, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0918446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408853 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1324995, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0977242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408858 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:11:34.408865 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324956, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0918446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408869 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324955, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.091332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408873 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324955, 'dev': 95, 'nlink': 1,
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.091332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408880 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1324959, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.092036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408898 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324956, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0918446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408906 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1324995, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0977242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408910 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:11:34.408915 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1324995, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0977242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408919 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:11:34.408926 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324955, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.091332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408930 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324955, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.091332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408934 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324955, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.091332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408941 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1324995, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0977242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408945 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:11:34.408949 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1324995, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0977242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408953 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:11:34.408957 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1324995, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0977242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408961 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:11:34.408965 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1324953, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0909994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408971 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1324946, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0900893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408975 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1324975, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0947697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408982 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1324928, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.088258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.408999 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1324996, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0984888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.409004 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1324968, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0933464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.409011 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1324935, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0889761, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.409015 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1324931, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0885952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.409021 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324956, 'dev': 95, 'nlink': 1, 'atime':
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0918446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.409025 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324955, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.091332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.409032 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1324995, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0977242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-05 01:11:34.409036 | orchestrator |
2026-03-05 01:11:34.409041 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-05 01:11:34.409045 | orchestrator | Thursday 05 March 2026 01:09:05 +0000 (0:00:32.899) 0:01:03.734 ********
2026-03-05 01:11:34.409049 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-05 01:11:34.409053 | orchestrator |
2026-03-05 01:11:34.409056 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-05 01:11:34.409060 | orchestrator | Thursday 05 March 2026 01:09:06 +0000 (0:00:00.734) 0:01:04.468 ********
2026-03-05 01:11:34.409064 | orchestrator | [WARNING]: Skipped
2026-03-05 01:11:34.409068 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409072 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-05 01:11:34.409076 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409080 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-05 01:11:34.409084 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-05 01:11:34.409088 | orchestrator | [WARNING]: Skipped
2026-03-05 01:11:34.409092 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409095 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-05 01:11:34.409099 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409103 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-05 01:11:34.409107 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-05 01:11:34.409111 | orchestrator | [WARNING]: Skipped
2026-03-05 01:11:34.409114 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409120 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-05 01:11:34.409124 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409129 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-05 01:11:34.409132 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-05 01:11:34.409136 | orchestrator | [WARNING]: Skipped
2026-03-05 01:11:34.409140 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409144 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-05 01:11:34.409150 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409156 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-05 01:11:34.409173 | orchestrator | [WARNING]: Skipped
2026-03-05 01:11:34.409180 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409186 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-05 01:11:34.409193 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409199 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-05 01:11:34.409205 | orchestrator | [WARNING]: Skipped
2026-03-05 01:11:34.409218 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409224 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-05 01:11:34.409229 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409235 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-05 01:11:34.409241 | orchestrator | [WARNING]: Skipped
2026-03-05 01:11:34.409248 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409258 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-05 01:11:34.409265 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-05 01:11:34.409271 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-05 01:11:34.409278 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-05 01:11:34.409285 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-05 01:11:34.409292 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-05 01:11:34.409299 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-05 01:11:34.409305 | orchestrator |
2026-03-05 01:11:34.409312 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-05 01:11:34.409321 | orchestrator | Thursday 05 March 2026 01:09:08 +0000 (0:00:02.089) 0:01:06.558 ********
2026-03-05 01:11:34.409326 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-05 01:11:34.409330 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:11:34.409335 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-05 01:11:34.409341 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:11:34.409348 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-05 01:11:34.409356 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:11:34.409362 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-05 01:11:34.409368 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:11:34.409374 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-05 01:11:34.409380 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:11:34.409385 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-05 01:11:34.409392 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:11:34.409399 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-05 01:11:34.409405 | orchestrator |
2026-03-05 01:11:34.409412 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-05 01:11:34.409417 | orchestrator | Thursday 05 March 2026 01:09:27 +0000 (0:00:18.810) 0:01:25.368 ********
2026-03-05 01:11:34.409424 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-05 01:11:34.409432 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:11:34.409439 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-05 01:11:34.409445 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:11:34.409452 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-05 01:11:34.409459 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:11:34.409481 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-05 01:11:34.409489 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:11:34.409496 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-05 01:11:34.409503 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:11:34.409509 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-05 01:11:34.409522 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:11:34.409529 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-05 01:11:34.409537 | orchestrator |
2026-03-05 01:11:34.409543 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-05 01:11:34.409551 | orchestrator | Thursday 05 March 2026 01:09:29 +0000 (0:00:02.760) 0:01:28.128 ********
2026-03-05 01:11:34.409556 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-05 01:11:34.409562 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-05 01:11:34.409566 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-05 01:11:34.409571 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:11:34.409576 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:11:34.409580 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:11:34.409585 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-05 01:11:34.409595 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:11:34.409600 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-05 01:11:34.409604 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-05 01:11:34.409609 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:11:34.409613 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-05 01:11:34.409618 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:11:34.409622 | orchestrator |
2026-03-05 01:11:34.409627 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-05 01:11:34.409635 | orchestrator | Thursday 05 March 2026 01:09:31 +0000 (0:00:02.048) 0:01:30.177 ********
2026-03-05 01:11:34.409640 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-05 01:11:34.409644 | orchestrator |
2026-03-05 01:11:34.409648 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-05 01:11:34.409652 | orchestrator | Thursday 05 March 2026 01:09:32 +0000 (0:00:00.767) 0:01:30.945 ********
2026-03-05 01:11:34.409656 | orchestrator | skipping: [testbed-manager]
2026-03-05 01:11:34.409660 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:11:34.409664 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:11:34.409667 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:11:34.409671 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:11:34.409675 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:11:34.409679 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:11:34.409683 | orchestrator |
2026-03-05 01:11:34.409686 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-05 01:11:34.409690 | orchestrator | Thursday 05 March 2026 01:09:33 +0000 (0:00:00.737) 0:01:31.682 ********
2026-03-05 01:11:34.409694 | orchestrator | skipping: [testbed-manager]
2026-03-05 01:11:34.409698 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:11:34.409702 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:11:34.409706 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:11:34.409709 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:11:34.409713 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:11:34.409717 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:11:34.409721 | orchestrator |
2026-03-05 01:11:34.409730 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-05 01:11:34.409734 | orchestrator | Thursday 05 March 2026 01:09:37 +0000 (0:00:03.787) 0:01:35.470 ********
2026-03-05 01:11:34.409742 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-05 01:11:34.409746 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-05 01:11:34.409750 | orchestrator | skipping:
[testbed-manager] 2026-03-05 01:11:34.409754 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:11:34.409758 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-05 01:11:34.409762 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:11:34.409766 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-05 01:11:34.409769 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:11:34.409773 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-05 01:11:34.409777 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:11:34.409781 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-05 01:11:34.409785 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:11:34.409789 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-05 01:11:34.409792 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:11:34.409796 | orchestrator | 2026-03-05 01:11:34.409800 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-05 01:11:34.409804 | orchestrator | Thursday 05 March 2026 01:09:40 +0000 (0:00:02.945) 0:01:38.416 ******** 2026-03-05 01:11:34.409808 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-05 01:11:34.409812 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:11:34.409816 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-05 01:11:34.409825 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:11:34.409828 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  
2026-03-05 01:11:34.409832 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:11:34.409838 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-05 01:11:34.409842 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-05 01:11:34.409846 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:11:34.409850 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:11:34.409854 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-05 01:11:34.409858 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-05 01:11:34.409862 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:11:34.409865 | orchestrator | 2026-03-05 01:11:34.409869 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-05 01:11:34.409873 | orchestrator | Thursday 05 March 2026 01:09:42 +0000 (0:00:01.968) 0:01:40.384 ******** 2026-03-05 01:11:34.409877 | orchestrator | [WARNING]: Skipped 2026-03-05 01:11:34.409881 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-05 01:11:34.409900 | orchestrator | due to this access issue: 2026-03-05 01:11:34.409904 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-05 01:11:34.409908 | orchestrator | not a directory 2026-03-05 01:11:34.409912 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-05 01:11:34.409916 | orchestrator | 2026-03-05 01:11:34.409919 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-05 01:11:34.409924 | orchestrator | Thursday 05 March 2026 01:09:43 +0000 (0:00:01.264) 0:01:41.648 ******** 2026-03-05 
01:11:34.409927 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:11:34.409934 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:11:34.409941 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:11:34.409945 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:11:34.409949 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:11:34.409953 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:11:34.409956 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:11:34.409960 | orchestrator | 2026-03-05 01:11:34.409964 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-05 01:11:34.409968 | orchestrator | Thursday 05 March 2026 01:09:44 +0000 (0:00:01.005) 0:01:42.654 ******** 2026-03-05 01:11:34.409972 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:11:34.409976 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:11:34.409979 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:11:34.409983 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:11:34.409987 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:11:34.409991 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:11:34.409995 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:11:34.409998 | orchestrator | 2026-03-05 01:11:34.410002 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-05 01:11:34.410006 | orchestrator | Thursday 05 March 2026 01:09:45 +0000 (0:00:00.970) 0:01:43.624 ******** 2026-03-05 01:11:34.410043 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-05 01:11:34.410052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.410056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.410063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.410068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.410079 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.410083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.410087 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.410091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-05 01:11:34.410095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.410100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.410106 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.410114 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.410122 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.410126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.410130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.410134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.410142 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-05 01:11:34.410147 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.410154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.410162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.410171 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.410181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.410185 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.410189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.410195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-05 01:11:34.410202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.410206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.410212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-05 01:11:34.410217 | orchestrator | 2026-03-05 01:11:34.410221 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-05 01:11:34.410225 | orchestrator | Thursday 05 March 2026 01:09:50 +0000 (0:00:04.754) 0:01:48.379 ******** 2026-03-05 01:11:34.410229 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-05 01:11:34.410233 | orchestrator | skipping: [testbed-manager] 2026-03-05 01:11:34.410237 | orchestrator | 2026-03-05 01:11:34.410240 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-05 01:11:34.410244 | orchestrator | Thursday 05 March 2026 01:09:51 +0000 (0:00:01.381) 0:01:49.761 ******** 2026-03-05 01:11:34.410253 | orchestrator | 2026-03-05 01:11:34.410257 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-05 01:11:34.410261 | orchestrator | Thursday 05 March 2026 01:09:51 +0000 (0:00:00.084) 0:01:49.846 ******** 2026-03-05 01:11:34.410265 | orchestrator | 2026-03-05 01:11:34.410269 | orchestrator | TASK [prometheus : 
Flush handlers] ********************************************* 2026-03-05 01:11:34.410274 | orchestrator | Thursday 05 March 2026 01:09:51 +0000 (0:00:00.069) 0:01:49.915 ******** 2026-03-05 01:11:34.410277 | orchestrator | 2026-03-05 01:11:34.410281 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-05 01:11:34.410285 | orchestrator | Thursday 05 March 2026 01:09:51 +0000 (0:00:00.068) 0:01:49.984 ******** 2026-03-05 01:11:34.410289 | orchestrator | 2026-03-05 01:11:34.410292 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-05 01:11:34.410296 | orchestrator | Thursday 05 March 2026 01:09:52 +0000 (0:00:00.294) 0:01:50.278 ******** 2026-03-05 01:11:34.410300 | orchestrator | 2026-03-05 01:11:34.410304 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-05 01:11:34.410307 | orchestrator | Thursday 05 March 2026 01:09:52 +0000 (0:00:00.066) 0:01:50.345 ******** 2026-03-05 01:11:34.410316 | orchestrator | 2026-03-05 01:11:34.410320 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-05 01:11:34.410323 | orchestrator | Thursday 05 March 2026 01:09:52 +0000 (0:00:00.068) 0:01:50.414 ******** 2026-03-05 01:11:34.410330 | orchestrator | 2026-03-05 01:11:34.410334 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-05 01:11:34.410338 | orchestrator | Thursday 05 March 2026 01:09:52 +0000 (0:00:00.091) 0:01:50.505 ******** 2026-03-05 01:11:34.410341 | orchestrator | changed: [testbed-manager] 2026-03-05 01:11:34.410345 | orchestrator | 2026-03-05 01:11:34.410349 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-05 01:11:34.410353 | orchestrator | Thursday 05 March 2026 01:10:13 +0000 (0:00:21.067) 0:02:11.572 ******** 2026-03-05 01:11:34.410357 | 
orchestrator | changed: [testbed-node-0] 2026-03-05 01:11:34.410360 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:11:34.410364 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:11:34.410368 | orchestrator | changed: [testbed-manager] 2026-03-05 01:11:34.410372 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:11:34.410375 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:11:34.410379 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:11:34.410383 | orchestrator | 2026-03-05 01:11:34.410387 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-05 01:11:34.410391 | orchestrator | Thursday 05 March 2026 01:10:22 +0000 (0:00:09.436) 0:02:21.009 ******** 2026-03-05 01:11:34.410395 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:11:34.410399 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:11:34.410411 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:11:34.410415 | orchestrator | 2026-03-05 01:11:34.410418 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-05 01:11:34.410422 | orchestrator | Thursday 05 March 2026 01:10:30 +0000 (0:00:07.956) 0:02:28.965 ******** 2026-03-05 01:11:34.410426 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:11:34.410430 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:11:34.410434 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:11:34.410437 | orchestrator | 2026-03-05 01:11:34.410441 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-05 01:11:34.410445 | orchestrator | Thursday 05 March 2026 01:10:44 +0000 (0:00:13.325) 0:02:42.290 ******** 2026-03-05 01:11:34.410448 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:11:34.410452 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:11:34.410456 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:11:34.410460 | orchestrator | 
changed: [testbed-node-3] 2026-03-05 01:11:34.410463 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:11:34.410467 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:11:34.410471 | orchestrator | changed: [testbed-manager] 2026-03-05 01:11:34.410475 | orchestrator | 2026-03-05 01:11:34.410478 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-05 01:11:34.410482 | orchestrator | Thursday 05 March 2026 01:10:58 +0000 (0:00:14.735) 0:02:57.025 ******** 2026-03-05 01:11:34.410486 | orchestrator | changed: [testbed-manager] 2026-03-05 01:11:34.410494 | orchestrator | 2026-03-05 01:11:34.410498 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-05 01:11:34.410502 | orchestrator | Thursday 05 March 2026 01:11:07 +0000 (0:00:08.896) 0:03:05.922 ******** 2026-03-05 01:11:34.410506 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:11:34.410510 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:11:34.410514 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:11:34.410517 | orchestrator | 2026-03-05 01:11:34.410521 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-05 01:11:34.410528 | orchestrator | Thursday 05 March 2026 01:11:17 +0000 (0:00:09.348) 0:03:15.270 ******** 2026-03-05 01:11:34.410532 | orchestrator | changed: [testbed-manager] 2026-03-05 01:11:34.410535 | orchestrator | 2026-03-05 01:11:34.410539 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-05 01:11:34.410543 | orchestrator | Thursday 05 March 2026 01:11:24 +0000 (0:00:07.099) 0:03:22.370 ******** 2026-03-05 01:11:34.410547 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:11:34.410554 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:11:34.410557 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:11:34.410561 | orchestrator | 
2026-03-05 01:11:34.410565 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:11:34.410569 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-05 01:11:34.410573 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-05 01:11:34.410577 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-05 01:11:34.410585 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-05 01:11:34.410589 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-05 01:11:34.410593 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-05 01:11:34.410597 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-05 01:11:34.410601 | orchestrator |
2026-03-05 01:11:34.410604 | orchestrator |
2026-03-05 01:11:34.410608 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:11:34.410612 | orchestrator | Thursday 05 March 2026 01:11:32 +0000 (0:00:07.908) 0:03:30.279 ********
2026-03-05 01:11:34.410616 | orchestrator | ===============================================================================
2026-03-05 01:11:34.410620 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 32.90s
2026-03-05 01:11:34.410624 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.07s
2026-03-05 01:11:34.410627 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.81s
2026-03-05 01:11:34.410631 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.74s
2026-03-05 01:11:34.410635 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 13.33s
2026-03-05 01:11:34.410639 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 9.44s
2026-03-05 01:11:34.410642 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.35s
2026-03-05 01:11:34.410646 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.90s
2026-03-05 01:11:34.410650 | orchestrator | prometheus : Copying over config.json files ----------------------------- 8.28s
2026-03-05 01:11:34.410654 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 7.96s
2026-03-05 01:11:34.410657 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 7.91s
2026-03-05 01:11:34.410664 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 7.10s
2026-03-05 01:11:34.410668 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.33s
2026-03-05 01:11:34.410672 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.75s
2026-03-05 01:11:34.410675 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.79s
2026-03-05 01:11:34.410679 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 3.20s
2026-03-05 01:11:34.410683 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.19s
2026-03-05 01:11:34.410687 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.95s
2026-03-05 01:11:34.410690 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.76s
2026-03-05 01:11:34.410697 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.60s
2026-03-05 01:11:34.410701 | orchestrator | 2026-03-05 01:11:34 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED
2026-03-05 01:11:34.410705 | orchestrator | 2026-03-05 01:11:34 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED
2026-03-05 01:11:34.410709 | orchestrator | 2026-03-05 01:11:34 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED
2026-03-05 01:11:34.410713 | orchestrator | 2026-03-05 01:11:34 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:11:37.443647 | orchestrator | 2026-03-05 01:11:37 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED
2026-03-05 01:11:37.444692 | orchestrator | 2026-03-05 01:11:37 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED
2026-03-05 01:11:37.445167 | orchestrator | 2026-03-05 01:11:37 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED
2026-03-05 01:11:37.446106 | orchestrator | 2026-03-05 01:11:37 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED
2026-03-05 01:11:37.446161 | orchestrator | 2026-03-05 01:11:37 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:11:40.488171 | orchestrator | 2026-03-05 01:11:40 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED
2026-03-05 01:11:40.488837 | orchestrator | 2026-03-05 01:11:40 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED
2026-03-05 01:11:40.490461 | orchestrator | 2026-03-05 01:11:40 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state STARTED
2026-03-05 01:11:40.491894 | orchestrator | 2026-03-05 01:11:40 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED
2026-03-05 01:11:40.492103 | orchestrator | 2026-03-05 01:11:40 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:11:43.525829 | orchestrator | 2026-03-05 01:11:43 | INFO  | Task de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state STARTED
2026-03-05 01:13:15.049495 | orchestrator | 2026-03-05 01:13:15 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED
2026-03-05 01:13:15.052520 | orchestrator | 2026-03-05 01:13:15 | INFO  | Task 
de1d6a3e-158d-4dba-bf21-e9b0172ba635 is in state SUCCESS
2026-03-05 01:13:15.053837 | orchestrator |
2026-03-05 01:13:15.053892 | orchestrator |
2026-03-05 01:13:15.053911 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:13:15.053952 | orchestrator |
2026-03-05 01:13:15.053966 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:13:15.053979 | orchestrator | Thursday 05 March 2026 01:09:49 +0000 (0:00:00.262) 0:00:00.262 ********
2026-03-05 01:13:15.053990 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:13:15.054003 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:13:15.054120 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:13:15.054135 | orchestrator |
2026-03-05 01:13:15.054142 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:13:15.054150 | orchestrator | Thursday 05 March 2026 01:09:49 +0000 (0:00:00.303) 0:00:00.565 ********
2026-03-05 01:13:15.054157 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-03-05 01:13:15.054165 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-03-05 01:13:15.054173 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-03-05 01:13:15.054180 | orchestrator |
2026-03-05 01:13:15.054187 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-03-05 01:13:15.054195 | orchestrator |
2026-03-05 01:13:15.054202 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-05 01:13:15.054209 | orchestrator | Thursday 05 March 2026 01:09:50 +0000 (0:00:00.505) 0:00:01.071 ********
2026-03-05 01:13:15.054217 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:13:15.054224 | orchestrator |
2026-03-05 01:13:15.054232 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-03-05 01:13:15.054239 | orchestrator | Thursday 05 March 2026 01:09:50 +0000 (0:00:00.679) 0:00:01.751 ********
2026-03-05 01:13:15.054247 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-03-05 01:13:15.054254 | orchestrator |
2026-03-05 01:13:15.054261 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-03-05 01:13:15.054269 | orchestrator | Thursday 05 March 2026 01:09:54 +0000 (0:00:03.387) 0:00:05.138 ********
2026-03-05 01:13:15.054276 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-03-05 01:13:15.054284 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-03-05 01:13:15.054291 | orchestrator |
2026-03-05 01:13:15.054298 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-03-05 01:13:15.054306 | orchestrator | Thursday 05 March 2026 01:10:01 +0000 (0:00:07.181) 0:00:12.320 ********
2026-03-05 01:13:15.054314 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-05 01:13:15.054322 | orchestrator |
2026-03-05 01:13:15.054351 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-03-05 01:13:15.054359 | orchestrator | Thursday 05 March 2026 01:10:05 +0000 (0:00:03.789) 0:00:16.110 ********
2026-03-05 01:13:15.054367 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-05 01:13:15.054374 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-03-05 01:13:15.054381 | orchestrator |
2026-03-05 01:13:15.054389 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-03-05 01:13:15.054396 | orchestrator | Thursday 05 March 2026 01:10:09 +0000 (0:00:04.234)
0:00:20.345 ******** 2026-03-05 01:13:15.054403 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-05 01:13:15.054412 | orchestrator | 2026-03-05 01:13:15.054421 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-05 01:13:15.054430 | orchestrator | Thursday 05 March 2026 01:10:13 +0000 (0:00:03.795) 0:00:24.140 ******** 2026-03-05 01:13:15.054438 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-05 01:13:15.054447 | orchestrator | 2026-03-05 01:13:15.054456 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-05 01:13:15.054464 | orchestrator | Thursday 05 March 2026 01:10:18 +0000 (0:00:04.961) 0:00:29.102 ******** 2026-03-05 01:13:15.054495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:13:15.054515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:13:15.054525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 
01:13:15.054540 | orchestrator | 2026-03-05 01:13:15.054548 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-05 01:13:15.054557 | orchestrator | Thursday 05 March 2026 01:10:22 +0000 (0:00:04.339) 0:00:33.441 ******** 2026-03-05 01:13:15.054566 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:13:15.054576 | orchestrator | 2026-03-05 01:13:15.054589 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-05 01:13:15.054598 | orchestrator | Thursday 05 March 2026 01:10:23 +0000 (0:00:00.927) 0:00:34.369 ******** 2026-03-05 01:13:15.054615 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:13:15.054624 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:13:15.054632 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:13:15.054640 | orchestrator | 2026-03-05 01:13:15.054647 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-05 01:13:15.054655 | orchestrator | Thursday 05 March 2026 01:10:29 +0000 (0:00:06.037) 0:00:40.406 ******** 2026-03-05 01:13:15.054663 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:13:15.054670 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:13:15.054687 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:13:15.054695 | orchestrator | 2026-03-05 01:13:15.054703 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-05 01:13:15.054710 | orchestrator | Thursday 05 March 2026 01:10:32 +0000 (0:00:02.664) 0:00:43.070 ******** 2026-03-05 01:13:15.054723 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:13:15.054735 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:13:15.054748 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:13:15.054760 | orchestrator | 2026-03-05 01:13:15.054773 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-05 01:13:15.054784 | orchestrator | Thursday 05 March 2026 01:10:34 +0000 (0:00:02.186) 0:00:45.256 ******** 2026-03-05 01:13:15.054792 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:13:15.054817 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:13:15.054825 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:13:15.054833 | orchestrator | 2026-03-05 01:13:15.054840 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-05 01:13:15.054847 | orchestrator | Thursday 05 March 2026 01:10:35 +0000 (0:00:01.372) 0:00:46.629 ******** 2026-03-05 01:13:15.054855 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.054862 | orchestrator | 2026-03-05 01:13:15.054870 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-05 01:13:15.054877 | orchestrator | Thursday 05 March 2026 01:10:36 +0000 (0:00:00.145) 0:00:46.774 ******** 2026-03-05 01:13:15.054885 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.054892 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.054900 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.054907 | orchestrator | 2026-03-05 01:13:15.054921 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-05 01:13:15.054928 | orchestrator | Thursday 05 March 2026 01:10:36 +0000 (0:00:00.404) 
0:00:47.179 ******** 2026-03-05 01:13:15.054936 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:13:15.054944 | orchestrator | 2026-03-05 01:13:15.054951 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-05 01:13:15.054959 | orchestrator | Thursday 05 March 2026 01:10:37 +0000 (0:00:00.756) 0:00:47.936 ******** 2026-03-05 01:13:15.054974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:13:15.054989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:13:15.054998 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:13:15.055009 | orchestrator | 2026-03-05 01:13:15.055017 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-05 01:13:15.055024 | orchestrator | Thursday 05 March 2026 01:10:42 +0000 (0:00:04.980) 0:00:52.917 ******** 2026-03-05 01:13:15.055042 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 01:13:15.055052 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.055060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 01:13:15.055072 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.055085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': 
True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 01:13:15.055094 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.055101 | orchestrator | 2026-03-05 01:13:15.055109 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-05 01:13:15.055119 | orchestrator | Thursday 05 March 2026 01:10:47 +0000 (0:00:05.743) 0:00:58.660 ******** 2026-03-05 01:13:15.055128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 01:13:15.055141 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.055153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 01:13:15.055162 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.055173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-05 01:13:15.055185 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.055193 | orchestrator | 2026-03-05 01:13:15.055200 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-05 01:13:15.055208 | orchestrator | Thursday 05 March 2026 01:10:54 +0000 (0:00:06.282) 0:01:04.943 ******** 2026-03-05 01:13:15.055215 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.055223 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.055230 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.055238 | orchestrator | 2026-03-05 01:13:15.055245 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-05 01:13:15.055253 | orchestrator | Thursday 05 March 2026 01:10:59 +0000 (0:00:05.120) 0:01:10.064 ******** 2026-03-05 01:13:15.055261 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:13:15.055278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:13:15.055297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:13:15.055305 | orchestrator | 2026-03-05 01:13:15.055313 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-05 01:13:15.055320 | orchestrator | Thursday 05 March 2026 01:11:06 +0000 (0:00:06.993) 0:01:17.057 ******** 2026-03-05 01:13:15.055327 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:13:15.055335 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:13:15.055342 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:13:15.055349 | orchestrator | 2026-03-05 01:13:15.055357 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-05 01:13:15.055364 | orchestrator | Thursday 05 March 2026 01:11:17 +0000 (0:00:11.195) 0:01:28.253 ******** 2026-03-05 01:13:15.055371 | orchestrator | skipping: [testbed-node-0] 
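The container definitions dumped in the tasks above all carry the same `healthcheck` block (`interval`, `retries`, `start_period`, `timeout`, and a `CMD-SHELL` test running `healthcheck_curl`). As a rough sketch of what those keys correspond to in container-engine terms, the mapping below renders them as docker-cli `--health-*` flags; the flag mapping is an illustration only, since kolla-ansible applies these settings through its own container modules:

```python
# Sketch: translate the kolla-style 'healthcheck' dict seen in the task output
# above into docker-cli flags. The --health-* mapping is an assumption for
# illustration; kolla-ansible applies these values via its own modules.
def healthcheck_flags(hc):
    """Render docker run --health-* flags from a kolla healthcheck dict."""
    return [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
        # drop the leading 'CMD-SHELL' marker, keep the actual command
        "--health-cmd=" + " ".join(hc["test"][1:]),
    ]

# Values copied from the glance_api definition for testbed-node-0 above.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"],
      "timeout": "30"}
print(healthcheck_flags(hc))
```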
2026-03-05 01:13:15.055379 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.055386 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.055393 | orchestrator | 2026-03-05 01:13:15.055400 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-05 01:13:15.055408 | orchestrator | Thursday 05 March 2026 01:11:24 +0000 (0:00:06.830) 0:01:35.084 ******** 2026-03-05 01:13:15.055415 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.055563 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.055575 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.055582 | orchestrator | 2026-03-05 01:13:15.055590 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-05 01:13:15.055598 | orchestrator | Thursday 05 March 2026 01:11:29 +0000 (0:00:05.368) 0:01:40.453 ******** 2026-03-05 01:13:15.055610 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.055618 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.055625 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.055632 | orchestrator | 2026-03-05 01:13:15.055639 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-05 01:13:15.055647 | orchestrator | Thursday 05 March 2026 01:11:34 +0000 (0:00:04.531) 0:01:44.984 ******** 2026-03-05 01:13:15.055654 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.055662 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.055669 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.055676 | orchestrator | 2026-03-05 01:13:15.055683 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-05 01:13:15.055694 | orchestrator | Thursday 05 March 2026 01:11:38 +0000 (0:00:04.150) 0:01:49.135 ******** 2026-03-05 01:13:15.055702 | orchestrator | skipping: [testbed-node-0] 
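The `haproxy` section of the glance_api definition repeated throughout this run supplies a `custom_member_list` of three backend servers (note the empty trailing entry, which templates must skip) plus the 6h `timeout server` override. A minimal sketch of how such a list could render into an HAProxy backend stanza; the stanza layout and the `_back` suffix are illustrative assumptions, as kolla-ansible's own haproxy templates produce the real configuration:

```python
# Sketch: render an HAProxy backend from the 'custom_member_list' entries that
# appear in the glance_api haproxy config above. Layout is illustrative only.
def render_backend(name, mode, members, extra):
    lines = [f"backend {name}_back", f"    mode {mode}"]
    lines += [f"    {opt}" for opt in extra]
    lines += [f"    {m}" for m in members if m]  # skip empty trailing entries
    return "\n".join(lines)

# Member entries copied from the log output above.
members = [
    "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
    "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
    "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
    "",
]
cfg = render_backend("glance_api", "http", members, ["timeout server 6h"])
print(cfg)
```

The `check inter 2000 rise 2 fall 5` options mean each member is health-checked every 2000 ms, marked up after 2 consecutive successes and down after 5 consecutive failures.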
2026-03-05 01:13:15.055709 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.055717 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.055724 | orchestrator | 2026-03-05 01:13:15.055731 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-05 01:13:15.055738 | orchestrator | Thursday 05 March 2026 01:11:38 +0000 (0:00:00.361) 0:01:49.497 ******** 2026-03-05 01:13:15.055746 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-05 01:13:15.055754 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.055761 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-05 01:13:15.055768 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.055776 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-05 01:13:15.055783 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.055790 | orchestrator | 2026-03-05 01:13:15.055819 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-05 01:13:15.055827 | orchestrator | Thursday 05 March 2026 01:11:42 +0000 (0:00:03.681) 0:01:53.179 ******** 2026-03-05 01:13:15.055835 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:13:15.055842 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:13:15.055850 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:13:15.055857 | orchestrator | 2026-03-05 01:13:15.055864 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-05 01:13:15.055871 | orchestrator | Thursday 05 March 2026 01:11:47 +0000 (0:00:05.085) 0:01:58.264 ******** 2026-03-05 01:13:15.055880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:13:15.055904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:13:15.055914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-05 01:13:15.055922 | orchestrator | 2026-03-05 01:13:15.055930 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-05 01:13:15.055937 | orchestrator | Thursday 05 March 2026 01:11:51 +0000 (0:00:03.527) 0:02:01.791 ******** 2026-03-05 01:13:15.055944 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.055952 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.055964 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.055971 | orchestrator | 2026-03-05 01:13:15.055979 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-05 01:13:15.055986 | orchestrator | Thursday 05 March 2026 01:11:51 +0000 (0:00:00.317) 0:02:02.109 ******** 2026-03-05 01:13:15.055994 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:13:15.056001 | orchestrator | 2026-03-05 01:13:15.056008 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] **********
2026-03-05 01:13:15.056016 | orchestrator | Thursday 05 March 2026 01:11:53 +0000 (0:00:02.439) 0:02:04.548 ********
2026-03-05 01:13:15.056024 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:13:15.056036 | orchestrator |
2026-03-05 01:13:15.056049 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-03-05 01:13:15.056057 | orchestrator | Thursday 05 March 2026 01:11:56 +0000 (0:00:02.731) 0:02:07.280 ********
2026-03-05 01:13:15.056064 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:13:15.056072 | orchestrator |
2026-03-05 01:13:15.056079 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-03-05 01:13:15.056086 | orchestrator | Thursday 05 March 2026 01:11:59 +0000 (0:00:02.590) 0:02:09.871 ********
2026-03-05 01:13:15.056094 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:13:15.056101 | orchestrator |
2026-03-05 01:13:15.056109 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-03-05 01:13:15.056120 | orchestrator | Thursday 05 March 2026 01:12:30 +0000 (0:00:31.844) 0:02:41.715 ********
2026-03-05 01:13:15.056128 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:13:15.056135 | orchestrator |
2026-03-05 01:13:15.056143 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-05 01:13:15.056150 | orchestrator | Thursday 05 March 2026 01:12:33 +0000 (0:00:02.245) 0:02:43.961 ********
2026-03-05 01:13:15.056157 | orchestrator |
2026-03-05 01:13:15.056164 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-05 01:13:15.056172 | orchestrator | Thursday 05 March 2026 01:12:33 +0000 (0:00:00.309) 0:02:44.270 ********
2026-03-05 01:13:15.056179 | orchestrator |
2026-03-05 01:13:15.056188 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-05 01:13:15.056197 | orchestrator | Thursday 05 March 2026 01:12:33 +0000 (0:00:00.072) 0:02:44.343 ********
2026-03-05 01:13:15.056206 | orchestrator |
2026-03-05 01:13:15.056215 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-03-05 01:13:15.056228 | orchestrator | Thursday 05 March 2026 01:12:33 +0000 (0:00:00.075) 0:02:44.419 ********
2026-03-05 01:13:15.056237 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:13:15.056246 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:13:15.056255 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:13:15.056264 | orchestrator |
2026-03-05 01:13:15.056273 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:13:15.056283 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-05 01:13:15.056292 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-05 01:13:15.056302 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-05 01:13:15.056311 | orchestrator |
2026-03-05 01:13:15.056319 | orchestrator |
2026-03-05 01:13:15.056328 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:13:15.056336 | orchestrator | Thursday 05 March 2026 01:13:12 +0000 (0:00:38.784) 0:03:23.203 ********
2026-03-05 01:13:15.056345 | orchestrator | ===============================================================================
2026-03-05 01:13:15.056354 | orchestrator | glance : Restart glance-api container ---------------------------------- 38.78s
2026-03-05 01:13:15.056362 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 31.84s
2026-03-05 01:13:15.056376 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 11.20s
2026-03-05 01:13:15.056385 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.18s
2026-03-05 01:13:15.056394 | orchestrator | glance : Copying over config.json files for services -------------------- 6.99s
2026-03-05 01:13:15.056402 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.83s
2026-03-05 01:13:15.056411 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 6.27s
2026-03-05 01:13:15.056420 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 6.04s
2026-03-05 01:13:15.056429 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.75s
2026-03-05 01:13:15.056437 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.37s
2026-03-05 01:13:15.056446 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.12s
2026-03-05 01:13:15.056455 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.09s
2026-03-05 01:13:15.056463 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.98s
2026-03-05 01:13:15.056473 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.96s
2026-03-05 01:13:15.056482 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.53s
2026-03-05 01:13:15.056492 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.34s
2026-03-05 01:13:15.056500 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.23s
2026-03-05 01:13:15.056507 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.15s
2026-03-05 01:13:15.056514 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.80s
2026-03-05 01:13:15.056522 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.79s
2026-03-05 01:13:15.056529 | orchestrator | 2026-03-05 01:13:15 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED
2026-03-05 01:13:15.057342 | orchestrator | 2026-03-05 01:13:15 | INFO  | Task 621c17f3-9c5e-4d23-b2a0-77f2c76add60 is in state SUCCESS
2026-03-05 01:13:15.060213 | orchestrator |
2026-03-05 01:13:15.060257 | orchestrator |
2026-03-05 01:13:15.060267 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:13:15.060275 | orchestrator |
2026-03-05 01:13:15.060283 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:13:15.060292 | orchestrator | Thursday 05 March 2026 01:09:52 +0000 (0:00:00.297) 0:00:00.297 ********
2026-03-05 01:13:15.060300 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:13:15.060308 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:13:15.060316 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:13:15.060324 | orchestrator |
2026-03-05 01:13:15.060332 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:13:15.060340 | orchestrator | Thursday 05 March 2026 01:09:52 +0000 (0:00:00.305) 0:00:00.603 ********
2026-03-05 01:13:15.060348 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-03-05 01:13:15.060357 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-03-05 01:13:15.060365 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-03-05 01:13:15.060373 | orchestrator |
2026-03-05 01:13:15.060381 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-03-05 01:13:15.060389 | orchestrator |
2026-03-05 01:13:15.060397 | 
orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-05 01:13:15.060405 | orchestrator | Thursday 05 March 2026 01:09:52 +0000 (0:00:00.448) 0:00:01.051 ********
2026-03-05 01:13:15.060413 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:13:15.060422 | orchestrator |
2026-03-05 01:13:15.060430 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-03-05 01:13:15.060450 | orchestrator | Thursday 05 March 2026 01:09:53 +0000 (0:00:00.599) 0:00:01.651 ********
2026-03-05 01:13:15.060458 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-03-05 01:13:15.060466 | orchestrator |
2026-03-05 01:13:15.060482 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-03-05 01:13:15.060491 | orchestrator | Thursday 05 March 2026 01:09:56 +0000 (0:00:03.266) 0:00:04.917 ********
2026-03-05 01:13:15.060499 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-03-05 01:13:15.060507 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-03-05 01:13:15.060515 | orchestrator |
2026-03-05 01:13:15.060523 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-03-05 01:13:15.060531 | orchestrator | Thursday 05 March 2026 01:10:04 +0000 (0:00:07.564) 0:00:12.482 ********
2026-03-05 01:13:15.060539 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-05 01:13:15.060547 | orchestrator |
2026-03-05 01:13:15.060556 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-03-05 01:13:15.060564 | orchestrator | Thursday 05 March 2026 01:10:07 +0000 (0:00:03.642) 0:00:16.125 ********
2026-03-05 01:13:15.060572 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-05 01:13:15.060580 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-03-05 01:13:15.060588 | orchestrator |
2026-03-05 01:13:15.060596 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-03-05 01:13:15.060604 | orchestrator | Thursday 05 March 2026 01:10:12 +0000 (0:00:04.256) 0:00:20.381 ********
2026-03-05 01:13:15.060612 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-05 01:13:15.060621 | orchestrator |
2026-03-05 01:13:15.060628 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-03-05 01:13:15.060637 | orchestrator | Thursday 05 March 2026 01:10:16 +0000 (0:00:04.267) 0:00:24.649 ********
2026-03-05 01:13:15.060645 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-03-05 01:13:15.060653 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-03-05 01:13:15.060661 | orchestrator |
2026-03-05 01:13:15.060669 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-03-05 01:13:15.060677 | orchestrator | Thursday 05 March 2026 01:10:25 +0000 (0:00:08.995) 0:00:33.644 ********
2026-03-05 01:13:15.060687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.060707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.060744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.060754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.060764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.060772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.060781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.060813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.060835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.060846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.060856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.060866 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-05 01:13:15.060875 | orchestrator |
2026-03-05 01:13:15.060885 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-05 01:13:15.060894 | orchestrator | Thursday 05 March 2026 01:10:28 +0000 (0:00:03.420) 0:00:37.064 ********
2026-03-05 01:13:15.060904 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:13:15.060913 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:13:15.061256 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:13:15.061264 | orchestrator |
2026-03-05 01:13:15.061279 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-05 01:13:15.061287 | orchestrator | Thursday 05 March 2026 01:10:29 +0000 (0:00:00.301) 0:00:37.366 ********
2026-03-05 01:13:15.061295 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:13:15.061303 | orchestrator |
2026-03-05 01:13:15.061317 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-03-05 01:13:15.061325 | orchestrator | Thursday 05 March 2026 01:10:29 +0000 (0:00:00.777) 0:00:38.143 ********
2026-03-05 01:13:15.061333 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-03-05 01:13:15.061341 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-05 01:13:15.061349 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-05 01:13:15.061357 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-05 01:13:15.061365 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-05 01:13:15.061373 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-05 01:13:15.061381 | orchestrator | 2026-03-05 01:13:15.061389 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-05 01:13:15.061397 | orchestrator | Thursday 05 March 2026 01:10:33 +0000 (0:00:03.296) 0:00:41.439 ******** 2026-03-05 01:13:15.061419 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-05 01:13:15.061435 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-05 01:13:15.061450 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-05 01:13:15.061471 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-05 01:13:15.061494 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-05 01:13:15.061514 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-05 01:13:15.061530 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-05 01:13:15.061546 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-05 01:13:15.061556 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-05 01:13:15.061575 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-05 01:13:15.061584 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-05 
01:13:15.061595 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-05 01:13:15.061604 | orchestrator | 2026-03-05 01:13:15.061612 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-05 01:13:15.061620 | orchestrator | Thursday 05 March 2026 01:10:38 +0000 (0:00:04.906) 0:00:46.346 ******** 2026-03-05 01:13:15.061629 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:13:15.061637 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:13:15.061645 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-05 01:13:15.061653 | orchestrator | 2026-03-05 01:13:15.061661 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-05 01:13:15.061669 | orchestrator | Thursday 05 March 2026 01:10:40 +0000 (0:00:02.407) 0:00:48.753 ******** 2026-03-05 01:13:15.061677 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-05 01:13:15.061690 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-05 01:13:15.061699 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-05 01:13:15.061707 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-05 01:13:15.061715 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-05 01:13:15.061723 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-05 01:13:15.061730 | orchestrator | 2026-03-05 01:13:15.061738 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-05 01:13:15.061746 | orchestrator | Thursday 05 March 2026 01:10:43 +0000 (0:00:02.758) 0:00:51.511 ******** 2026-03-05 01:13:15.061754 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-05 01:13:15.061762 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-05 01:13:15.061770 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-05 01:13:15.061779 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-05 01:13:15.061787 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-05 01:13:15.062265 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-05 01:13:15.062286 | orchestrator | 2026-03-05 01:13:15.062294 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-05 01:13:15.062302 | orchestrator | Thursday 05 March 2026 01:10:45 +0000 (0:00:01.878) 0:00:53.390 ******** 2026-03-05 01:13:15.062311 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.062319 | orchestrator | 2026-03-05 01:13:15.062327 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-05 01:13:15.062335 | orchestrator | Thursday 05 March 2026 01:10:45 +0000 (0:00:00.466) 0:00:53.857 ******** 2026-03-05 01:13:15.062343 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.062352 | orchestrator | 
skipping: [testbed-node-1] 2026-03-05 01:13:15.062386 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.062395 | orchestrator | 2026-03-05 01:13:15.062404 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-05 01:13:15.062412 | orchestrator | Thursday 05 March 2026 01:10:46 +0000 (0:00:00.864) 0:00:54.721 ******** 2026-03-05 01:13:15.062420 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:13:15.062433 | orchestrator | 2026-03-05 01:13:15.062447 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-05 01:13:15.062461 | orchestrator | Thursday 05 March 2026 01:10:49 +0000 (0:00:02.711) 0:00:57.432 ******** 2026-03-05 01:13:15.062482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.062499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.062524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.062540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.062587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.062597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.062610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.062628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.062636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 
01:13:15.062645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.062658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.062667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.062675 | orchestrator | 2026-03-05 01:13:15.062683 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-05 01:13:15.062691 | orchestrator | Thursday 05 March 2026 01:10:55 +0000 (0:00:06.361) 0:01:03.794 ******** 2026-03-05 01:13:15.062703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:13:15.062717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.062726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.062739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.062748 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.062756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:13:15.062768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.062781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.062790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.062817 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.062827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:13:15.062840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.062852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.062870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.062878 | orchestrator | skipping: 
[testbed-node-2] 2026-03-05 01:13:15.062886 | orchestrator | 2026-03-05 01:13:15.062895 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-05 01:13:15.062904 | orchestrator | Thursday 05 March 2026 01:10:56 +0000 (0:00:01.399) 0:01:05.194 ******** 2026-03-05 01:13:15.062914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:13:15.062924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.062940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.062950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.062965 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.062979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:13:15.062990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.062999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.063009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.063021 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.063032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:13:15.063050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.063060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.063071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.063080 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.063089 | orchestrator | 2026-03-05 01:13:15.063099 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-03-05 01:13:15.063108 | orchestrator | Thursday 05 March 2026 01:10:59 +0000 (0:00:02.349) 0:01:07.543 ******** 2026-03-05 01:13:15.063118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.063134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 
01:13:15.063152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.063162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063246 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063265 | orchestrator | 2026-03-05 01:13:15.063274 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-05 01:13:15.063284 | orchestrator | Thursday 05 March 2026 01:11:04 +0000 (0:00:05.546) 0:01:13.090 ******** 2026-03-05 01:13:15.063293 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-05 01:13:15.063309 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-05 01:13:15.063318 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-05 01:13:15.063326 | orchestrator | 2026-03-05 01:13:15.063334 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-05 01:13:15.063341 | orchestrator | Thursday 05 March 2026 01:11:07 +0000 (0:00:02.385) 0:01:15.476 ******** 2026-03-05 01:13:15.063350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.063362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.063370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.063379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.063523 | orchestrator | 2026-03-05 01:13:15.063531 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-05 01:13:15.063540 | orchestrator | Thursday 05 March 2026 01:11:28 +0000 (0:00:21.636) 0:01:37.113 ******** 2026-03-05 01:13:15.063548 | orchestrator | changed: [testbed-node-0] 
2026-03-05 01:13:15.063556 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:13:15.063564 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:13:15.063572 | orchestrator | 2026-03-05 01:13:15.063584 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-05 01:13:15.063592 | orchestrator | Thursday 05 March 2026 01:11:30 +0000 (0:00:01.932) 0:01:39.046 ******** 2026-03-05 01:13:15.063601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:13:15.063609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-03-05 01:13:15.063618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.063638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.063647 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.063655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:13:15.063667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.063675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.063684 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.063700 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.063713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-05 01:13:15.063721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.063733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.063742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-05 01:13:15.063750 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.063758 | orchestrator | 2026-03-05 
01:13:15.063766 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-05 01:13:15.063774 | orchestrator | Thursday 05 March 2026 01:11:32 +0000 (0:00:01.317) 0:01:40.363 ******** 2026-03-05 01:13:15.063782 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.063791 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.063907 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.063928 | orchestrator | 2026-03-05 01:13:15.063937 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-05 01:13:15.063954 | orchestrator | Thursday 05 March 2026 01:11:32 +0000 (0:00:00.658) 0:01:41.022 ******** 2026-03-05 01:13:15.063963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.063983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.063992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-05 01:13:15.064006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.064015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.064028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.064036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.064050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.064058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.064070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.064079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-05 01:13:15.064092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-03-05 01:13:15.064100 | orchestrator | 2026-03-05 01:13:15.064109 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-05 01:13:15.064117 | orchestrator | Thursday 05 March 2026 01:11:36 +0000 (0:00:03.903) 0:01:44.925 ******** 2026-03-05 01:13:15.064125 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.064133 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:13:15.064141 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:13:15.064149 | orchestrator | 2026-03-05 01:13:15.064157 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-05 01:13:15.064165 | orchestrator | Thursday 05 March 2026 01:11:37 +0000 (0:00:00.986) 0:01:45.911 ******** 2026-03-05 01:13:15.064173 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:13:15.064181 | orchestrator | 2026-03-05 01:13:15.064189 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-05 01:13:15.064197 | orchestrator | Thursday 05 March 2026 01:11:40 +0000 (0:00:02.568) 0:01:48.480 ******** 2026-03-05 01:13:15.064205 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:13:15.064213 | orchestrator | 2026-03-05 01:13:15.064221 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-05 01:13:15.064233 | orchestrator | Thursday 05 March 2026 01:11:43 +0000 (0:00:02.884) 0:01:51.365 ******** 2026-03-05 01:13:15.064242 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:13:15.064251 | orchestrator | 2026-03-05 01:13:15.064259 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-05 01:13:15.064267 | orchestrator | Thursday 05 March 2026 01:12:06 +0000 (0:00:23.682) 0:02:15.047 ******** 2026-03-05 01:13:15.064274 | orchestrator | 2026-03-05 01:13:15.064282 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-03-05 01:13:15.064290 | orchestrator | Thursday 05 March 2026 01:12:06 +0000 (0:00:00.075) 0:02:15.123 ******** 2026-03-05 01:13:15.064298 | orchestrator | 2026-03-05 01:13:15.064306 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-05 01:13:15.064314 | orchestrator | Thursday 05 March 2026 01:12:07 +0000 (0:00:00.129) 0:02:15.252 ******** 2026-03-05 01:13:15.064322 | orchestrator | 2026-03-05 01:13:15.064330 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-05 01:13:15.064337 | orchestrator | Thursday 05 March 2026 01:12:07 +0000 (0:00:00.130) 0:02:15.383 ******** 2026-03-05 01:13:15.064345 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:13:15.064353 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:13:15.064361 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:13:15.064368 | orchestrator | 2026-03-05 01:13:15.064375 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-05 01:13:15.064381 | orchestrator | Thursday 05 March 2026 01:12:34 +0000 (0:00:26.873) 0:02:42.256 ******** 2026-03-05 01:13:15.064388 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:13:15.064395 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:13:15.064406 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:13:15.064413 | orchestrator | 2026-03-05 01:13:15.064420 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-05 01:13:15.064429 | orchestrator | Thursday 05 March 2026 01:12:40 +0000 (0:00:06.787) 0:02:49.044 ******** 2026-03-05 01:13:15.064436 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:13:15.064443 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:13:15.064450 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:13:15.064457 | orchestrator | 2026-03-05 
01:13:15.064463 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-05 01:13:15.064470 | orchestrator | Thursday 05 March 2026 01:13:00 +0000 (0:00:19.281) 0:03:08.326 ******** 2026-03-05 01:13:15.064477 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:13:15.064484 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:13:15.064490 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:13:15.064497 | orchestrator | 2026-03-05 01:13:15.064504 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-05 01:13:15.064511 | orchestrator | Thursday 05 March 2026 01:13:11 +0000 (0:00:11.586) 0:03:19.912 ******** 2026-03-05 01:13:15.064518 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:13:15.064524 | orchestrator | 2026-03-05 01:13:15.064531 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:13:15.064538 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-05 01:13:15.064545 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:13:15.064552 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:13:15.064559 | orchestrator | 2026-03-05 01:13:15.064566 | orchestrator | 2026-03-05 01:13:15.064572 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:13:15.064579 | orchestrator | Thursday 05 March 2026 01:13:11 +0000 (0:00:00.282) 0:03:20.194 ******** 2026-03-05 01:13:15.064586 | orchestrator | =============================================================================== 2026-03-05 01:13:15.064593 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 26.87s 2026-03-05 01:13:15.064599 | orchestrator | cinder 
: Running Cinder bootstrap container ---------------------------- 23.68s 2026-03-05 01:13:15.064606 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 21.64s 2026-03-05 01:13:15.064613 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 19.28s 2026-03-05 01:13:15.064619 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.59s 2026-03-05 01:13:15.064626 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.00s 2026-03-05 01:13:15.064633 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.56s 2026-03-05 01:13:15.064639 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.79s 2026-03-05 01:13:15.064646 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 6.36s 2026-03-05 01:13:15.064653 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.55s 2026-03-05 01:13:15.064660 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.91s 2026-03-05 01:13:15.064667 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.27s 2026-03-05 01:13:15.064673 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.26s 2026-03-05 01:13:15.064680 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.90s 2026-03-05 01:13:15.064687 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.64s 2026-03-05 01:13:15.064694 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.42s 2026-03-05 01:13:15.064704 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.30s 2026-03-05 01:13:15.064714 | orchestrator | service-ks-register : 
cinder | Creating services ------------------------ 3.27s 2026-03-05 01:13:15.064721 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.88s 2026-03-05 01:13:15.064728 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.76s 2026-03-05 01:13:15.064735 | orchestrator | 2026-03-05 01:13:15 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:15.064741 | orchestrator | 2026-03-05 01:13:15 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:18.101025 | orchestrator | 2026-03-05 01:13:18 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:18.104851 | orchestrator | 2026-03-05 01:13:18 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:18.105644 | orchestrator | 2026-03-05 01:13:18 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:18.105700 | orchestrator | 2026-03-05 01:13:18 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:21.146658 | orchestrator | 2026-03-05 01:13:21 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:21.146987 | orchestrator | 2026-03-05 01:13:21 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:21.148163 | orchestrator | 2026-03-05 01:13:21 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:21.148223 | orchestrator | 2026-03-05 01:13:21 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:24.192838 | orchestrator | 2026-03-05 01:13:24 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:24.197263 | orchestrator | 2026-03-05 01:13:24 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:24.200483 | orchestrator | 2026-03-05 01:13:24 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state 
STARTED 2026-03-05 01:13:24.200865 | orchestrator | 2026-03-05 01:13:24 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:27.253891 | orchestrator | 2026-03-05 01:13:27 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:27.255994 | orchestrator | 2026-03-05 01:13:27 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:27.257321 | orchestrator | 2026-03-05 01:13:27 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:27.257382 | orchestrator | 2026-03-05 01:13:27 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:30.313894 | orchestrator | 2026-03-05 01:13:30 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:30.314862 | orchestrator | 2026-03-05 01:13:30 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:30.316930 | orchestrator | 2026-03-05 01:13:30 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:30.316969 | orchestrator | 2026-03-05 01:13:30 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:33.348973 | orchestrator | 2026-03-05 01:13:33 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:33.350858 | orchestrator | 2026-03-05 01:13:33 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:33.351052 | orchestrator | 2026-03-05 01:13:33 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:33.351070 | orchestrator | 2026-03-05 01:13:33 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:36.393692 | orchestrator | 2026-03-05 01:13:36 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:36.396549 | orchestrator | 2026-03-05 01:13:36 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:36.399337 | orchestrator | 
2026-03-05 01:13:36 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:36.399397 | orchestrator | 2026-03-05 01:13:36 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:39.446252 | orchestrator | 2026-03-05 01:13:39 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:39.448454 | orchestrator | 2026-03-05 01:13:39 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:39.450072 | orchestrator | 2026-03-05 01:13:39 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:39.450137 | orchestrator | 2026-03-05 01:13:39 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:42.496643 | orchestrator | 2026-03-05 01:13:42 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:42.498572 | orchestrator | 2026-03-05 01:13:42 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:42.500255 | orchestrator | 2026-03-05 01:13:42 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:42.500314 | orchestrator | 2026-03-05 01:13:42 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:45.550414 | orchestrator | 2026-03-05 01:13:45 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:45.552153 | orchestrator | 2026-03-05 01:13:45 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:45.554552 | orchestrator | 2026-03-05 01:13:45 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:45.554596 | orchestrator | 2026-03-05 01:13:45 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:48.596092 | orchestrator | 2026-03-05 01:13:48 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:48.598251 | orchestrator | 2026-03-05 01:13:48 | INFO  | Task 
8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:48.599580 | orchestrator | 2026-03-05 01:13:48 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:48.599662 | orchestrator | 2026-03-05 01:13:48 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:51.641694 | orchestrator | 2026-03-05 01:13:51 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:51.643802 | orchestrator | 2026-03-05 01:13:51 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:51.644466 | orchestrator | 2026-03-05 01:13:51 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:51.644504 | orchestrator | 2026-03-05 01:13:51 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:54.690353 | orchestrator | 2026-03-05 01:13:54 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:54.693104 | orchestrator | 2026-03-05 01:13:54 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:54.694987 | orchestrator | 2026-03-05 01:13:54 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:54.695049 | orchestrator | 2026-03-05 01:13:54 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:13:57.731353 | orchestrator | 2026-03-05 01:13:57 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:13:57.733048 | orchestrator | 2026-03-05 01:13:57 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:13:57.735038 | orchestrator | 2026-03-05 01:13:57 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:13:57.735106 | orchestrator | 2026-03-05 01:13:57 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:14:00.776583 | orchestrator | 2026-03-05 01:14:00 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state 
STARTED 2026-03-05 01:14:00.778925 | orchestrator | 2026-03-05 01:14:00 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:14:00.781646 | orchestrator | 2026-03-05 01:14:00 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:14:00.783722 | orchestrator | 2026-03-05 01:14:00 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:14:03.825181 | orchestrator | 2026-03-05 01:14:03 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:14:03.826947 | orchestrator | 2026-03-05 01:14:03 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:14:03.829042 | orchestrator | 2026-03-05 01:14:03 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:14:03.829716 | orchestrator | 2026-03-05 01:14:03 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:14:06.870961 | orchestrator | 2026-03-05 01:14:06 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:14:06.873157 | orchestrator | 2026-03-05 01:14:06 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:14:06.874444 | orchestrator | 2026-03-05 01:14:06 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:14:06.874479 | orchestrator | 2026-03-05 01:14:06 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:14:09.926605 | orchestrator | 2026-03-05 01:14:09 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:14:09.928998 | orchestrator | 2026-03-05 01:14:09 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:14:09.932339 | orchestrator | 2026-03-05 01:14:09 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:14:09.932729 | orchestrator | 2026-03-05 01:14:09 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:14:12.971206 | orchestrator | 
2026-03-05 01:14:12 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:14:12.972970 | orchestrator | 2026-03-05 01:14:12 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:14:12.974731 | orchestrator | 2026-03-05 01:14:12 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:14:12.974801 | orchestrator | 2026-03-05 01:14:12 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:14:16.020634 | orchestrator | 2026-03-05 01:14:16 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:14:16.029250 | orchestrator | 2026-03-05 01:14:16 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:14:16.031328 | orchestrator | 2026-03-05 01:14:16 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:14:16.031429 | orchestrator | 2026-03-05 01:14:16 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:14:19.081723 | orchestrator | 2026-03-05 01:14:19 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:14:19.085288 | orchestrator | 2026-03-05 01:14:19 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:14:19.088200 | orchestrator | 2026-03-05 01:14:19 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:14:19.088271 | orchestrator | 2026-03-05 01:14:19 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:14:22.135895 | orchestrator | 2026-03-05 01:14:22 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:14:22.137156 | orchestrator | 2026-03-05 01:14:22 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:14:22.138297 | orchestrator | 2026-03-05 01:14:22 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:14:22.138323 | orchestrator | 2026-03-05 01:14:22 | INFO  | 
Wait 1 second(s) until the next check 2026-03-05 01:14:25.183994 | orchestrator | 2026-03-05 01:14:25 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:14:25.186344 | orchestrator | 2026-03-05 01:14:25 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:14:25.189087 | orchestrator | 2026-03-05 01:14:25 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:14:25.189156 | orchestrator | 2026-03-05 01:14:25 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:14:28.223121 | orchestrator | 2026-03-05 01:14:28 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state STARTED 2026-03-05 01:14:28.224787 | orchestrator | 2026-03-05 01:14:28 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state STARTED 2026-03-05 01:16:28.338844 | orchestrator | 2026-03-05 01:16:28 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:16:28.338927 | orchestrator | 2026-03-05 01:16:28 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:31.384304 | orchestrator | 2026-03-05 01:16:31 | INFO  | Task edf8aca7-aaf1-4e9d-8bb7-db737b968961 is in state SUCCESS 2026-03-05 01:16:31.385979 | orchestrator | 2026-03-05 01:16:31.386058 | orchestrator | 2026-03-05 01:16:31.386066 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:16:31.386071 | orchestrator | 2026-03-05 01:16:31.386076 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:16:31.386081 | orchestrator | Thursday 05 March 2026 01:13:17 +0000 (0:00:00.305) 0:00:00.305 ******** 2026-03-05 01:16:31.386085 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:16:31.386091 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:16:31.386095 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:16:31.386099 | orchestrator | 2026-03-05 01:16:31.386103 | orchestrator | 
TASK [Group hosts based on enabled services] *********************************** 2026-03-05 01:16:31.386107 | orchestrator | Thursday 05 March 2026 01:13:17 +0000 (0:00:00.355) 0:00:00.661 ******** 2026-03-05 01:16:31.386111 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-05 01:16:31.386115 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-05 01:16:31.386119 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-05 01:16:31.386123 | orchestrator | 2026-03-05 01:16:31.386127 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-05 01:16:31.386131 | orchestrator | 2026-03-05 01:16:31.386135 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-05 01:16:31.386139 | orchestrator | Thursday 05 March 2026 01:13:18 +0000 (0:00:00.505) 0:00:01.167 ******** 2026-03-05 01:16:31.386143 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:16:31.386148 | orchestrator | 2026-03-05 01:16:31.386152 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-05 01:16:31.386174 | orchestrator | Thursday 05 March 2026 01:13:18 +0000 (0:00:00.584) 0:00:01.752 ******** 2026-03-05 01:16:31.386181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:16:31.386198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:16:31.386203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:16:31.386208 | orchestrator | 2026-03-05 01:16:31.386214 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-05 01:16:31.386221 | orchestrator | Thursday 05 March 2026 01:13:19 +0000 (0:00:00.738) 0:00:02.491 ******** 2026-03-05 01:16:31.386227 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-05 01:16:31.386233 | orchestrator | issue: 
'/operations/prometheus/grafana' is not a directory 2026-03-05 01:16:31.386240 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 01:16:31.386527 | orchestrator | 2026-03-05 01:16:31.386544 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-05 01:16:31.386551 | orchestrator | Thursday 05 March 2026 01:13:20 +0000 (0:00:00.884) 0:00:03.375 ******** 2026-03-05 01:16:31.386557 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:16:31.386561 | orchestrator | 2026-03-05 01:16:31.386565 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-05 01:16:31.386570 | orchestrator | Thursday 05 March 2026 01:13:21 +0000 (0:00:00.761) 0:00:04.137 ******** 2026-03-05 01:16:31.386584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:16:31.386598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:16:31.386602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:16:31.386606 | orchestrator | 2026-03-05 01:16:31.386632 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-05 01:16:31.386636 | orchestrator | Thursday 05 March 2026 01:13:22 +0000 (0:00:01.513) 0:00:05.651 ******** 2026-03-05 01:16:31.386640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}})  2026-03-05 01:16:31.386644 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:16:31.386649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 01:16:31.386653 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:16:31.386664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 01:16:31.386668 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:16:31.386676 | orchestrator | 2026-03-05 01:16:31.386680 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-05 01:16:31.386683 | orchestrator | Thursday 05 March 2026 01:13:23 +0000 (0:00:00.535) 0:00:06.187 ******** 2026-03-05 01:16:31.386687 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 01:16:31.386691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 01:16:31.386695 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:16:31.386699 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:16:31.386706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-05 01:16:31.386710 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:16:31.386714 | orchestrator | 2026-03-05 01:16:31.386718 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-05 01:16:31.386721 | orchestrator | Thursday 05 March 2026 01:13:24 +0000 (0:00:00.856) 0:00:07.043 ******** 2026-03-05 01:16:31.386725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:16:31.386732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:16:31.386740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:16:31.386744 | orchestrator | 2026-03-05 01:16:31.386748 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-05 01:16:31.386752 | orchestrator | Thursday 05 March 2026 01:13:25 +0000 (0:00:01.288) 0:00:08.332 ******** 2026-03-05 01:16:31.386756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:16:31.386763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:16:31.386767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-05 01:16:31.386771 | orchestrator | 2026-03-05 01:16:31.386775 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-05 01:16:31.386779 | orchestrator | Thursday 05 March 2026 01:13:26 +0000 (0:00:01.390) 0:00:09.722 ******** 2026-03-05 01:16:31.386783 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:16:31.386787 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:16:31.386791 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:16:31.386795 | orchestrator | 2026-03-05 01:16:31.386799 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-05 01:16:31.386802 | orchestrator | Thursday 05 March 2026 01:13:27 
+0000 (0:00:00.536) 0:00:10.259 ******** 2026-03-05 01:16:31.386806 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-05 01:16:31.386814 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-05 01:16:31.386818 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-05 01:16:31.386822 | orchestrator | 2026-03-05 01:16:31.386825 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-05 01:16:31.386829 | orchestrator | Thursday 05 March 2026 01:13:28 +0000 (0:00:01.307) 0:00:11.567 ******** 2026-03-05 01:16:31.386833 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-05 01:16:31.386840 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-05 01:16:31.386844 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-05 01:16:31.386848 | orchestrator | 2026-03-05 01:16:31.386851 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-05 01:16:31.386855 | orchestrator | Thursday 05 March 2026 01:13:29 +0000 (0:00:01.280) 0:00:12.847 ******** 2026-03-05 01:16:31.386859 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-05 01:16:31.386863 | orchestrator | 2026-03-05 01:16:31.386867 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-05 01:16:31.386873 | orchestrator | Thursday 05 March 2026 01:13:30 +0000 (0:00:00.839) 0:00:13.686 ******** 2026-03-05 01:16:31.386880 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-05 01:16:31.386886 | 
orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-05 01:16:31.386893 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:16:31.386899 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:16:31.387099 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:16:31.387111 | orchestrator | 2026-03-05 01:16:31.387116 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-05 01:16:31.387119 | orchestrator | Thursday 05 March 2026 01:13:31 +0000 (0:00:00.731) 0:00:14.418 ******** 2026-03-05 01:16:31.387123 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:16:31.387128 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:16:31.387132 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:16:31.387135 | orchestrator | 2026-03-05 01:16:31.387139 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-05 01:16:31.387143 | orchestrator | Thursday 05 March 2026 01:13:32 +0000 (0:00:00.591) 0:00:15.009 ******** 2026-03-05 01:16:31.387148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1324271, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9663467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1324271, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9663467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1324271, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9663467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1324376, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.998582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1324376, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.998582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1324376, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.998582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1324287, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9687033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1324287, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9687033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1324287, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9687033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1324380, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0000482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1324380, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0000482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1324380, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0000482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1324354, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9943495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387291 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1324354, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9943495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1324354, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9943495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1324366, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9966807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387324 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1324366, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9966807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1324366, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9966807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1324267, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9649425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387353 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1324267, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9649425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1324267, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9649425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1324278, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9672172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1324278, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9672172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1324278, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9672172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1324288, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9687033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1324288, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9687033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1324288, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9687033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1324360, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9955788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1324360, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9955788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1324360, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9955788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1324371, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9980397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1324371, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9980397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1324371, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9980397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1324284, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.968316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1324284, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.968316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1324284, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.968316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1324365, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9957037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1324365, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9957037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1324365, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9957037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1324356, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9949496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1324356, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9949496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1324356, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9949496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1324347, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9934285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1324347, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9934285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1324347, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9934285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1324345, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9917037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1324345, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9917037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1324345, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9917037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1324364, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9957037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1324364, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9957037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1324364, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9957037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1324290, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9912992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1324290, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9912992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1324290, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9912992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1324369, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9973388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1324369, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9973388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1324369, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670030.9973388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1324912, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0875387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1324912, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0875387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1324912, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0875387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1324772, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0668643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1324772, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0668643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1324772, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0668643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1324759, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0597048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1324759, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0597048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1324759, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0597048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1324805, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0694366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1324805, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0694366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1324805, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0694366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1324387, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0007534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1324387, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0007534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1324387, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0007534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1324856, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0763922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1324856, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0763922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1324856, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0763922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1324809, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0739188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1324809, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0739188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.387861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1324809, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime':
1772670031.0739188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1324860, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0767052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1324860, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0767052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1324860, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0767052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1324896, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0850122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1324896, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0850122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1324896, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0850122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1324851, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0759604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1324851, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0759604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1324851, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0759604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1324794, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0678873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1324794, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0678873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1324794, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0678873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1324768, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0633411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1324768, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0633411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387955 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1324768, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0633411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1324790, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0677397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1324790, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0677397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387977 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1324790, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0677397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1324760, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0624511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.387989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1324760, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0624511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-03-05 01:16:31.387994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1324760, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0624511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1324795, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0686138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1324795, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0686138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1324795, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0686138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1324869, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0839756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1324869, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1772670031.0839756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1324869, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0839756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1324864, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0794015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 115472, 'inode': 1324864, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0794015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1324864, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0794015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1324390, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0587049, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1324390, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0587049, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1324390, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0587049, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1324755, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0597048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1324755, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0597048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1324755, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0597048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1324836, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0752583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388096 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1324836, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0752583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1324836, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0752583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-05 01:16:31.388109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1324863, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0779176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.388113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1324863, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0779176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.388117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1324863, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772670031.0779176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-05 01:16:31.388121 | orchestrator |
2026-03-05 01:16:31.388125 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-03-05 01:16:31.388130 | orchestrator | Thursday 05 March 2026 01:14:11 +0000 (0:00:39.174) 0:00:54.183 ********
2026-03-05 01:16:31.388153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-05 01:16:31.388158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-05 01:16:31.388162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-05 01:16:31.388170 | orchestrator |
2026-03-05 01:16:31.388174 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-03-05 01:16:31.388178 | orchestrator | Thursday 05 March 2026 01:14:12 +0000 (0:00:01.041) 0:00:55.225 ********
2026-03-05 01:16:31.388182 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:16:31.388186 | orchestrator |
2026-03-05 01:16:31.388190 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-03-05 01:16:31.388194 | orchestrator | Thursday 05 March 2026 01:14:14 +0000 (0:00:02.554) 0:00:57.779 ********
2026-03-05 01:16:31.388198 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:16:31.388202 | orchestrator |
2026-03-05 01:16:31.388208 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-05 01:16:31.388212 | orchestrator | Thursday 05 March 2026 01:14:17 +0000 (0:00:02.552) 0:01:00.331 ********
2026-03-05 01:16:31.388216 | orchestrator |
2026-03-05 01:16:31.388219 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-05 01:16:31.388223 | orchestrator | Thursday 05 March 2026 01:14:17 +0000 (0:00:00.078) 0:01:00.410 ********
2026-03-05 01:16:31.388227 | orchestrator |
2026-03-05 01:16:31.388231 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-05 01:16:31.388235 | orchestrator | Thursday 05 March 2026 01:14:17 +0000 (0:00:00.062) 0:01:00.472 ********
2026-03-05 01:16:31.388238 | orchestrator |
2026-03-05 01:16:31.388242 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-05 01:16:31.388246 | orchestrator | Thursday 05 March 2026 01:14:17 +0000 (0:00:00.271) 0:01:00.743 ********
2026-03-05 01:16:31.388250 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:16:31.388253 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:16:31.388257 | orchestrator | changed: [testbed-node-0]
2026-03-05 
01:16:31.388261 | orchestrator | 2026-03-05 01:16:31.388265 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-05 01:16:31.388269 | orchestrator | Thursday 05 March 2026 01:14:24 +0000 (0:00:06.982) 0:01:07.725 ******** 2026-03-05 01:16:31.388273 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:16:31.388276 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:16:31.388280 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-05 01:16:31.388284 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-05 01:16:31.388288 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-03-05 01:16:31.388292 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:16:31.388296 | orchestrator | 2026-03-05 01:16:31.388300 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-05 01:16:31.388306 | orchestrator | Thursday 05 March 2026 01:15:04 +0000 (0:00:40.037) 0:01:47.763 ******** 2026-03-05 01:16:31.388313 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:16:31.388319 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:16:31.388325 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:16:31.388331 | orchestrator | 2026-03-05 01:16:31.388337 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-05 01:16:31.388343 | orchestrator | Thursday 05 March 2026 01:15:34 +0000 (0:00:29.324) 0:02:17.088 ******** 2026-03-05 01:16:31.388349 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:16:31.388359 | orchestrator | 2026-03-05 01:16:31.388365 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-05 01:16:31.388371 | orchestrator | Thursday 05 March 2026 
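The `FAILED - RETRYING … (12 retries left)` lines above come from Ansible's `retries`/`delay`/`until` loop on the "Waiting for grafana to start" handler: the probe is re-run until it succeeds or the retry budget is exhausted. A small Python sketch of those semantics (the probe, retry count, and delay are illustrative, not taken from the actual role):

```python
import time

def wait_for_service(probe, retries=12, delay=5):
    """Retry a health probe, mirroring Ansible's retries/until semantics:
    the task only fails after the final attempt also fails."""
    for attempt in range(retries + 1):
        if probe():
            return attempt
        if attempt < retries:
            print(f"FAILED - RETRYING ({retries - attempt} retries left).")
            time.sleep(delay)
    raise TimeoutError("service did not become ready")

# Simulated probe that succeeds on the 4th attempt, matching the log where
# three retries (12, 11, 10 left) were consumed before grafana answered:
calls = {"n": 0}
def probe():
    calls["n"] += 1
    return calls["n"] >= 4

print(wait_for_service(probe, delay=0))  # 3
```

Note that the counter in the log counts *remaining* retries down from the configured maximum, which is why a healthy service on the first try prints nothing at all.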
01:15:36 +0000 (0:00:02.175) 0:02:19.263 ******** 2026-03-05 01:16:31.388377 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:16:31.388383 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:16:31.388393 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:16:31.388400 | orchestrator | 2026-03-05 01:16:31.388406 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-05 01:16:31.388412 | orchestrator | Thursday 05 March 2026 01:15:36 +0000 (0:00:00.569) 0:02:19.833 ******** 2026-03-05 01:16:31.388418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-03-05 01:16:31.388425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-05 01:16:31.388429 | orchestrator | 2026-03-05 01:16:31.388433 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-05 01:16:31.388437 | orchestrator | Thursday 05 March 2026 01:15:39 +0000 (0:00:02.213) 0:02:22.047 ******** 2026-03-05 01:16:31.388441 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:16:31.388444 | orchestrator | 2026-03-05 01:16:31.388448 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-05 01:16:31.388452 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:16:31.388458 | orchestrator | testbed-node-1 : ok=14  
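The "Enable grafana datasources" task above skips the disabled `influxdb` entry and provisions only the enabled `opensearch` one. A hedged sketch of that filter plus registration via Grafana's datasource HTTP API (`POST /api/datasources`); the base URL and token are placeholders and no request is actually sent here:

```python
import json
from urllib import request

# Datasource map as rendered in the log: 'influxdb' is disabled,
# 'opensearch' is enabled. Only enabled entries are provisioned.
DATASOURCES = {
    "influxdb": {"enabled": False,
                 "data": {"name": "telegraf", "type": "influxdb"}},
    "opensearch": {"enabled": True,
                   "data": {"name": "opensearch",
                            "type": "grafana-opensearch-datasource",
                            "access": "proxy",
                            "url": "https://api-int.testbed.osism.xyz:9200"}},
}

def enabled_payloads(datasources: dict) -> list[dict]:
    """Return the request bodies for the datasources that should be created."""
    return [ds["data"] for ds in datasources.values() if ds["enabled"]]

def provision(base_url: str, token: str) -> None:
    """POST each enabled datasource to Grafana (sketch only: base_url and
    token are hypothetical, and the network call is left disabled)."""
    for payload in enabled_payloads(DATASOURCES):
        req = request.Request(
            f"{base_url}/api/datasources",
            data=json.dumps(payload).encode(),
            headers={"Authorization": f"Bearer {token}",
                     "Content-Type": "application/json"},
            method="POST",
        )
        # request.urlopen(req)  # intentionally not executed in this sketch

print([p["name"] for p in enabled_payloads(DATASOURCES)])  # ['opensearch']
```

This matches the play output: the influxdb item shows `skipping`, the opensearch item shows `changed`.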
changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:16:31.388462 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-05 01:16:31.388466 | orchestrator | 2026-03-05 01:16:31.388470 | orchestrator | 2026-03-05 01:16:31.388474 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-05 01:16:31.388478 | orchestrator | Thursday 05 March 2026 01:15:39 +0000 (0:00:00.273) 0:02:22.320 ******** 2026-03-05 01:16:31.388482 | orchestrator | =============================================================================== 2026-03-05 01:16:31.388486 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 40.04s 2026-03-05 01:16:31.388490 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.17s 2026-03-05 01:16:31.388497 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 29.32s 2026-03-05 01:16:31.388501 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.98s 2026-03-05 01:16:31.388504 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.55s 2026-03-05 01:16:31.388508 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.55s 2026-03-05 01:16:31.388512 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.21s 2026-03-05 01:16:31.388516 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.18s 2026-03-05 01:16:31.388520 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.51s 2026-03-05 01:16:31.388523 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.39s 2026-03-05 01:16:31.388527 | orchestrator | grafana : Configuring Prometheus as data source for Grafana 
------------- 1.31s 2026-03-05 01:16:31.388531 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.29s 2026-03-05 01:16:31.388539 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.28s 2026-03-05 01:16:31.388543 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.04s 2026-03-05 01:16:31.388547 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.88s 2026-03-05 01:16:31.388550 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.86s 2026-03-05 01:16:31.388554 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.84s 2026-03-05 01:16:31.388558 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.76s 2026-03-05 01:16:31.388562 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.74s 2026-03-05 01:16:31.388566 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s 2026-03-05 01:16:31.388569 | orchestrator | 2026-03-05 01:16:31 | INFO  | Task 8920b535-26d2-4cb7-b2e7-0154ffcf39bc is in state SUCCESS 2026-03-05 01:16:31.388573 | orchestrator | 2026-03-05 01:16:31 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED 2026-03-05 01:16:31.388577 | orchestrator | 2026-03-05 01:16:31 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:16:31.388581 | orchestrator | 2026-03-05 01:16:31 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:16:34.431605 | orchestrator | 2026-03-05 01:16:34 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED 2026-03-05 01:16:34.433855 | orchestrator | 2026-03-05 01:16:34 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:16:34.434373 | orchestrator | 2026-03-05 01:16:34 | INFO  
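From here on the orchestrator log switches to the OSISM task watcher: it repeatedly reads the state of the two queued tasks and sleeps between rounds until both leave STARTED (the states are Celery-style PENDING/STARTED/SUCCESS). A minimal sketch of such a poll loop, with a fake state source standing in for the real task backend:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=120):
    """Poll until every task has left the PENDING/STARTED states, logging
    one line per task per round, like the console output above."""
    for _ in range(max_checks):
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(s not in ("PENDING", "STARTED") for s in states.values()):
            return states
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError("tasks did not finish in time")

# Demo with a fake state source that flips to SUCCESS after two rounds:
calls = {"n": 0}
def fake_state(tid):
    calls["n"] += 1
    return "SUCCESS" if calls["n"] > 4 else "STARTED"

result = wait_for_tasks(fake_state, ["663e881a", "2e06ed28"], interval=0)
print(sorted(result.values()))  # ['SUCCESS', 'SUCCESS']
```

The ~3 s gap between log rounds is slightly more than the announced 1 s wait, which is consistent with the state queries themselves taking a couple of seconds on top of the sleep.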
| Wait 1 second(s) until the next check 2026-03-05 01:16:37.480913 | orchestrator | 2026-03-05 01:16:37 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED 2026-03-05 01:16:37.483405 | orchestrator | 2026-03-05 01:16:37 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:16:37.483981 | orchestrator | 2026-03-05 01:16:37 | INFO  | Wait 1 second(s) until the next check [identical STARTED/wait polling iterations, repeated every ~3 s from 01:16:40 through 01:19:58, elided] 2026-03-05 01:20:01.694945 | orchestrator | 2026-03-05 01:20:01 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state
STARTED 2026-03-05 01:20:01.696923 | orchestrator | 2026-03-05 01:20:01 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:20:01.697293 | orchestrator | 2026-03-05 01:20:01 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:20:04.739559 | orchestrator | 2026-03-05 01:20:04 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED 2026-03-05 01:20:04.740440 | orchestrator | 2026-03-05 01:20:04 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:20:04.740562 | orchestrator | 2026-03-05 01:20:04 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:20:07.793887 | orchestrator | 2026-03-05 01:20:07 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED 2026-03-05 01:20:07.795173 | orchestrator | 2026-03-05 01:20:07 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state STARTED 2026-03-05 01:20:07.795255 | orchestrator | 2026-03-05 01:20:07 | INFO  | Wait 1 second(s) until the next check 2026-03-05 01:20:10.849325 | orchestrator | 2026-03-05 01:20:10 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED 2026-03-05 01:20:10.964612 | orchestrator | 2026-03-05 01:20:10 | INFO  | Task 2e06ed28-a6ee-43fa-8f03-5470e9d6f109 is in state SUCCESS 2026-03-05 01:20:10.964672 | orchestrator | 2026-03-05 01:20:10.964681 | orchestrator | 2026-03-05 01:20:10.964689 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-05 01:20:10.964696 | orchestrator | 2026-03-05 01:20:10.964703 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-05 01:20:10.964710 | orchestrator | Thursday 05 March 2026 01:11:38 +0000 (0:00:00.211) 0:00:00.211 ******** 2026-03-05 01:20:10.964716 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:20:10.964725 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:20:10.964731 | orchestrator | ok: [testbed-node-2] 
2026-03-05 01:20:10.964737 | orchestrator |
2026-03-05 01:20:10.964744 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:20:10.964750 | orchestrator | Thursday 05 March 2026 01:11:38 +0000 (0:00:00.342) 0:00:00.554 ********
2026-03-05 01:20:10.964757 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-05 01:20:10.964790 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-05 01:20:10.964809 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-05 01:20:10.964816 | orchestrator |
2026-03-05 01:20:10.964822 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-05 01:20:10.964829 | orchestrator |
2026-03-05 01:20:10.964835 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-05 01:20:10.964841 | orchestrator | Thursday 05 March 2026 01:11:39 +0000 (0:00:00.903) 0:00:01.457 ********
2026-03-05 01:20:10.964847 | orchestrator |
2026-03-05 01:20:10.964854 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-05 01:20:10.964860 | orchestrator |
2026-03-05 01:20:10.964866 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-05 01:20:10.964872 | orchestrator |
2026-03-05 01:20:10.964879 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-05 01:20:10.964885 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:20:10.964891 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:20:10.964898 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:20:10.964904 | orchestrator |
2026-03-05 01:20:10.964910 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:20:10.964917 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:20:10.964926 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:20:10.964932 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:20:10.964939 | orchestrator |
2026-03-05 01:20:10.964945 | orchestrator |
2026-03-05 01:20:10.964952 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:20:10.964958 | orchestrator | Thursday 05 March 2026 01:15:36 +0000 (0:03:56.163) 0:03:57.621 ********
2026-03-05 01:20:10.964965 | orchestrator | ===============================================================================
2026-03-05 01:20:10.964971 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 236.16s
2026-03-05 01:20:10.964977 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s
2026-03-05 01:20:10.964984 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-03-05 01:20:10.964990 | orchestrator |
2026-03-05 01:20:10.964996 | orchestrator |
2026-03-05 01:20:10.965002 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:20:10.965009 | orchestrator |
2026-03-05 01:20:10.965015 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-05 01:20:10.965021 | orchestrator | Thursday 05 March 2026 01:11:01 +0000 (0:00:00.575) 0:00:00.576 ********
2026-03-05 01:20:10.965028 | orchestrator | changed: [testbed-manager]
2026-03-05 01:20:10.965035 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.965041 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:20:10.965048 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:20:10.965054 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:20:10.965060 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:20:10.965066 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:20:10.965073 | orchestrator |
2026-03-05 01:20:10.965079 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:20:10.965085 | orchestrator | Thursday 05 March 2026 01:11:02 +0000 (0:00:01.588) 0:00:02.164 ********
2026-03-05 01:20:10.965092 | orchestrator | changed: [testbed-manager]
2026-03-05 01:20:10.965098 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.965104 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:20:10.965110 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:20:10.965116 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:20:10.965123 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:20:10.965129 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:20:10.965140 | orchestrator |
2026-03-05 01:20:10.965147 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:20:10.965153 | orchestrator | Thursday 05 March 2026 01:11:04 +0000 (0:00:01.765) 0:00:03.929 ********
2026-03-05 01:20:10.965160 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-05 01:20:10.965167 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-05 01:20:10.965173 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-05 01:20:10.965179 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-05 01:20:10.965185 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-05 01:20:10.965191 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-05 01:20:10.965198 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-05 01:20:10.965204 | orchestrator |
2026-03-05 01:20:10.965210 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-05 01:20:10.965216 | orchestrator |
2026-03-05 01:20:10.965223 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-05 01:20:10.965239 | orchestrator | Thursday 05 March 2026 01:11:06 +0000 (0:00:01.642) 0:00:05.571 ********
2026-03-05 01:20:10.965246 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:20:10.965253 | orchestrator |
2026-03-05 01:20:10.965259 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-05 01:20:10.965265 | orchestrator | Thursday 05 March 2026 01:11:07 +0000 (0:00:01.697) 0:00:07.269 ********
2026-03-05 01:20:10.965271 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-05 01:20:10.965278 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-05 01:20:10.965284 | orchestrator |
2026-03-05 01:20:10.965290 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-05 01:20:10.965296 | orchestrator | Thursday 05 March 2026 01:11:13 +0000 (0:00:05.967) 0:00:13.236 ********
2026-03-05 01:20:10.965302 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-05 01:20:10.965309 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-05 01:20:10.965315 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.965321 | orchestrator |
2026-03-05 01:20:10.965330 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-05 01:20:10.965337 | orchestrator | Thursday 05 March 2026 01:11:18 +0000 (0:00:04.771) 0:00:18.007 ********
2026-03-05 01:20:10.965343 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.965349 | orchestrator |
2026-03-05 01:20:10.965356 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-05 01:20:10.965362 | orchestrator | Thursday 05 March 2026 01:11:19 +0000 (0:00:01.197) 0:00:19.205 ********
2026-03-05 01:20:10.965368 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.965374 | orchestrator |
2026-03-05 01:20:10.965380 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-05 01:20:10.965402 | orchestrator | Thursday 05 March 2026 01:11:21 +0000 (0:00:02.084) 0:00:21.290 ********
2026-03-05 01:20:10.965408 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.965414 | orchestrator |
2026-03-05 01:20:10.965421 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-05 01:20:10.965427 | orchestrator | Thursday 05 March 2026 01:11:27 +0000 (0:00:05.545) 0:00:26.835 ********
2026-03-05 01:20:10.965433 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.965440 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.965446 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.965452 | orchestrator |
2026-03-05 01:20:10.965458 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-05 01:20:10.965464 | orchestrator | Thursday 05 March 2026 01:11:27 +0000 (0:00:00.327) 0:00:27.163 ********
2026-03-05 01:20:10.965471 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:20:10.965477 | orchestrator |
2026-03-05 01:20:10.965483 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-05 01:20:10.965494 | orchestrator | Thursday 05 March 2026 01:12:05 +0000 (0:00:37.818) 0:01:04.981 ********
2026-03-05 01:20:10.965500 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.965507 | orchestrator |
2026-03-05 01:20:10.965513 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-05 01:20:10.965519 | orchestrator | Thursday 05 March 2026 01:12:23 +0000 (0:00:17.537) 0:01:22.519 ********
2026-03-05 01:20:10.965525 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:20:10.965532 | orchestrator |
2026-03-05 01:20:10.965538 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-05 01:20:10.965544 | orchestrator | Thursday 05 March 2026 01:12:37 +0000 (0:00:14.646) 0:01:37.165 ********
2026-03-05 01:20:10.965550 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:20:10.965557 | orchestrator |
2026-03-05 01:20:10.965563 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-05 01:20:10.965569 | orchestrator | Thursday 05 March 2026 01:12:39 +0000 (0:00:01.287) 0:01:38.453 ********
2026-03-05 01:20:10.965575 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.965582 | orchestrator |
2026-03-05 01:20:10.965588 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-05 01:20:10.965594 | orchestrator | Thursday 05 March 2026 01:12:39 +0000 (0:00:00.462) 0:01:38.916 ********
2026-03-05 01:20:10.965600 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:20:10.965606 | orchestrator |
2026-03-05 01:20:10.965613 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-05 01:20:10.965619 | orchestrator | Thursday 05 March 2026 01:12:40 +0000 (0:00:00.581) 0:01:39.497 ********
2026-03-05 01:20:10.965625 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:20:10.965632 | orchestrator |
2026-03-05 01:20:10.965638 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-05 01:20:10.965644 | orchestrator | Thursday 05 March 2026 01:12:59 +0000 (0:00:19.080) 0:01:58.577 ********
2026-03-05 01:20:10.965650 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.965657 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.965663 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.965669 | orchestrator |
2026-03-05 01:20:10.965675 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-05 01:20:10.965681 | orchestrator |
2026-03-05 01:20:10.965687 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-05 01:20:10.965694 | orchestrator | Thursday 05 March 2026 01:12:59 +0000 (0:00:00.338) 0:01:58.916 ********
2026-03-05 01:20:10.965700 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:20:10.965706 | orchestrator |
2026-03-05 01:20:10.965712 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-05 01:20:10.965719 | orchestrator | Thursday 05 March 2026 01:13:00 +0000 (0:00:00.675) 0:01:59.591 ********
2026-03-05 01:20:10.965725 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.965731 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.965737 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.965744 | orchestrator |
2026-03-05 01:20:10.965750 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-05 01:20:10.965761 | orchestrator | Thursday 05 March 2026 01:13:02 +0000 (0:00:02.347) 0:02:01.939 ********
2026-03-05 01:20:10.965767 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.965773 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.965780 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.965786 | orchestrator |
2026-03-05 01:20:10.965792 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-05 01:20:10.965798 | orchestrator | Thursday 05 March 2026 01:13:05 +0000 (0:00:02.535) 0:02:04.474 ********
2026-03-05 01:20:10.965804 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.965811 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.965821 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.965828 | orchestrator |
2026-03-05 01:20:10.965834 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-05 01:20:10.965840 | orchestrator | Thursday 05 March 2026 01:13:05 +0000 (0:00:00.387) 0:02:04.862 ********
2026-03-05 01:20:10.965846 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-05 01:20:10.965852 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.965859 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-05 01:20:10.965865 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.965875 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-05 01:20:10.965881 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-05 01:20:10.965888 | orchestrator |
2026-03-05 01:20:10.965894 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-05 01:20:10.965900 | orchestrator | Thursday 05 March 2026 01:13:13 +0000 (0:00:08.111) 0:02:12.973 ********
2026-03-05 01:20:10.965906 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.965912 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.965919 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.965925 | orchestrator |
2026-03-05 01:20:10.965931 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-05 01:20:10.965937 | orchestrator | Thursday 05 March 2026 01:13:13 +0000 (0:00:00.349) 0:02:13.323 ********
2026-03-05 01:20:10.965943 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-05 01:20:10.965950 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.965956 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-05 01:20:10.965962 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.965969 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-05 01:20:10.965975 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.965981 | orchestrator |
2026-03-05 01:20:10.965987 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-05 01:20:10.965993 | orchestrator | Thursday 05 March 2026 01:13:14 +0000 (0:00:00.775) 0:02:14.098 ********
2026-03-05 01:20:10.966000 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.966006 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.966012 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.966059 | orchestrator |
2026-03-05 01:20:10.966065 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-05 01:20:10.966071 | orchestrator | Thursday 05 March 2026 01:13:15 +0000 (0:00:00.729) 0:02:14.828 ********
2026-03-05 01:20:10.966078 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.966084 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.966090 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.966097 | orchestrator |
2026-03-05 01:20:10.966103 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-05 01:20:10.966109 | orchestrator | Thursday 05 March 2026 01:13:16 +0000 (0:00:00.937) 0:02:15.766 ********
2026-03-05 01:20:10.966116 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.966122 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.966129 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.966135 | orchestrator |
2026-03-05 01:20:10.966141 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-05 01:20:10.966147 | orchestrator | Thursday 05 March 2026 01:13:18 +0000 (0:00:02.423) 0:02:18.189 ********
2026-03-05 01:20:10.966153 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.966160 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.966166 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:20:10.966172 | orchestrator |
2026-03-05 01:20:10.966178 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-05 01:20:10.966184 | orchestrator | Thursday 05 March 2026 01:13:43 +0000 (0:00:24.334) 0:02:42.524 ********
2026-03-05 01:20:10.966191 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.966203 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.966209 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:20:10.966215 | orchestrator |
2026-03-05 01:20:10.966221 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-05 01:20:10.966228 | orchestrator | Thursday 05 March 2026 01:13:58 +0000 (0:00:15.101) 0:02:57.626 ********
2026-03-05 01:20:10.966234 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:20:10.966240 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.966246 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.966252 | orchestrator |
2026-03-05 01:20:10.966259 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-05 01:20:10.966265 | orchestrator | Thursday 05 March 2026 01:13:59 +0000 (0:00:01.104) 0:02:58.730 ********
2026-03-05 01:20:10.966271 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.966277 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.966283 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.966290 | orchestrator |
2026-03-05 01:20:10.966296 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-05 01:20:10.966302 | orchestrator | Thursday 05 March 2026 01:14:14 +0000 (0:00:14.677) 0:03:13.407 ********
2026-03-05 01:20:10.966308 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.966315 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.966321 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.966327 | orchestrator |
2026-03-05 01:20:10.966333 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-05 01:20:10.966339 | orchestrator | Thursday 05 March 2026 01:14:15 +0000 (0:00:01.251) 0:03:14.659 ********
2026-03-05 01:20:10.966346 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.966352 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.966358 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.966364 | orchestrator |
2026-03-05 01:20:10.966375 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-05 01:20:10.966381 | orchestrator |
2026-03-05 01:20:10.966400 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-05 01:20:10.966407 | orchestrator | Thursday 05 March 2026 01:14:15 +0000 (0:00:00.626) 0:03:15.285 ********
2026-03-05 01:20:10.966413 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:20:10.966419 | orchestrator |
2026-03-05 01:20:10.966425 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-05 01:20:10.966432 | orchestrator | Thursday 05 March 2026 01:14:16 +0000 (0:00:00.660) 0:03:15.945 ********
2026-03-05 01:20:10.966438 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-05 01:20:10.966444 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-05 01:20:10.966451 | orchestrator |
2026-03-05 01:20:10.966457 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-05 01:20:10.966463 | orchestrator | Thursday 05 March 2026 01:14:20 +0000 (0:00:03.978) 0:03:19.923 ********
2026-03-05 01:20:10.966473 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-05 01:20:10.966480 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-05 01:20:10.966486 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-05 01:20:10.966493 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-05 01:20:10.966499 | orchestrator |
2026-03-05 01:20:10.966505 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-05 01:20:10.966512 | orchestrator | Thursday 05 March 2026 01:14:28 +0000 (0:00:07.506) 0:03:27.429 ********
2026-03-05 01:20:10.966518 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-05 01:20:10.966524 | orchestrator |
2026-03-05 01:20:10.966535 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-05 01:20:10.966542 | orchestrator | Thursday 05 March 2026 01:14:31 +0000 (0:00:03.573) 0:03:31.003 ********
2026-03-05 01:20:10.966548 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-05 01:20:10.966554 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-05 01:20:10.966560 | orchestrator |
2026-03-05 01:20:10.966567 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-05 01:20:10.966573 | orchestrator | Thursday 05 March 2026 01:14:36 +0000 (0:00:04.431) 0:03:35.434 ********
2026-03-05 01:20:10.966579 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-05 01:20:10.966585 | orchestrator |
2026-03-05 01:20:10.966591 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-05 01:20:10.966598 | orchestrator | Thursday 05 March 2026 01:14:39 +0000 (0:00:03.538) 0:03:38.972 ********
2026-03-05 01:20:10.966604 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-05 01:20:10.966610 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-05 01:20:10.966617 | orchestrator |
2026-03-05 01:20:10.966623 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-05 01:20:10.966629 | orchestrator | Thursday 05 March 2026 01:14:47 +0000 (0:00:08.068) 0:03:47.040 ********
2026-03-05 01:20:10.966641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-05 01:20:10.966670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-05 01:20:10.966679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-05 01:20:10.966692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.966701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.966717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.966731 | orchestrator |
2026-03-05 01:20:10.966738 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-03-05 01:20:10.966745 | orchestrator | Thursday 05 March 2026 01:14:49 +0000 (0:00:01.422) 0:03:48.463 ********
2026-03-05 01:20:10.966751 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.966757 | orchestrator |
2026-03-05 01:20:10.966769 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-05 01:20:10.966776 | orchestrator | Thursday 05 March 2026 01:14:49 +0000 (0:00:00.153) 0:03:48.616 ********
2026-03-05 01:20:10.966782 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.966788 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.966795 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.966801 | orchestrator |
2026-03-05 01:20:10.966807 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-05 01:20:10.966814 | orchestrator | Thursday 05 March 2026 01:14:49 +0000 (0:00:00.315) 0:03:48.931 ********
2026-03-05 01:20:10.966825 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-05 01:20:10.966831 | orchestrator |
2026-03-05 01:20:10.966837 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-05 01:20:10.966844 | orchestrator | Thursday 05 March 2026 01:14:50 +0000 (0:00:00.982) 0:03:49.914 ********
2026-03-05 01:20:10.966850 | orchestrator | skipping:
[testbed-node-0] 2026-03-05 01:20:10.966856 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.966863 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.966869 | orchestrator | 2026-03-05 01:20:10.966879 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-05 01:20:10.966886 | orchestrator | Thursday 05 March 2026 01:14:50 +0000 (0:00:00.367) 0:03:50.281 ******** 2026-03-05 01:20:10.966892 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:20:10.966899 | orchestrator | 2026-03-05 01:20:10.966905 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-05 01:20:10.966911 | orchestrator | Thursday 05 March 2026 01:14:51 +0000 (0:00:00.717) 0:03:50.998 ******** 2026-03-05 01:20:10.966918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:10.966926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:10.966946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:10.966959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.966966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}}) 2026-03-05 01:20:10.966972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.966979 | orchestrator | 2026-03-05 01:20:10.966985 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-05 01:20:10.966992 | orchestrator | Thursday 05 March 2026 01:14:54 +0000 (0:00:02.829) 0:03:53.828 ******** 2026-03-05 01:20:10.966999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:20:10.967016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.967023 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.967033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:20:10.967041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.967047 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.967054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:20:10.967072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.967079 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.967085 | orchestrator | 2026-03-05 01:20:10.967092 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-05 01:20:10.967098 | orchestrator | Thursday 05 March 2026 01:14:55 +0000 (0:00:00.592) 0:03:54.420 ******** 2026-03-05 01:20:10.967108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:20:10.967116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.967122 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.967129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:20:10.967145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.967151 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.967162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:20:10.967169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.967175 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.967181 | orchestrator | 2026-03-05 01:20:10.967188 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-05 01:20:10.967194 | orchestrator | Thursday 05 March 2026 01:14:56 +0000 (0:00:00.913) 0:03:55.334 ******** 2026-03-05 01:20:10.967201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:10.967217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2026-03-05 01:20:10.967230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:10.967242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.967254 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.967265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.967282 | orchestrator | 2026-03-05 01:20:10.967293 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-05 01:20:10.967304 | orchestrator | Thursday 05 March 2026 01:14:58 +0000 (0:00:02.775) 0:03:58.109 ******** 2026-03-05 01:20:10.967378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:10.967432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2026-03-05 01:20:10.967440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:10.967453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.967465 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.967476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.967482 | orchestrator | 2026-03-05 01:20:10.967489 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-05 01:20:10.967495 | orchestrator | Thursday 05 March 2026 01:15:04 +0000 (0:00:05.950) 0:04:04.060 ******** 2026-03-05 01:20:10.967502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:20:10.967509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.967520 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.967533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:20:10.967546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.967552 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.967559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-05 01:20:10.967566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.967577 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.967583 | orchestrator | 2026-03-05 01:20:10.967589 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-05 01:20:10.967596 | orchestrator | Thursday 05 March 2026 01:15:05 +0000 (0:00:00.725) 0:04:04.785 ******** 2026-03-05 01:20:10.967602 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:20:10.967609 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:10.967615 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:20:10.967621 | orchestrator | 
2026-03-05 01:20:10.967627 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-05 01:20:10.967634 | orchestrator | Thursday 05 March 2026 01:15:07 +0000 (0:00:01.655) 0:04:06.441 ******** 2026-03-05 01:20:10.967640 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.967646 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.967652 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.967659 | orchestrator | 2026-03-05 01:20:10.967665 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-05 01:20:10.967671 | orchestrator | Thursday 05 March 2026 01:15:07 +0000 (0:00:00.367) 0:04:06.809 ******** 2026-03-05 01:20:10.967684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2026-03-05 01:20:10.967695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:10.967707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:10.967715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.967728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.967738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.967746 | orchestrator | 2026-03-05 01:20:10.967752 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-05 01:20:10.967759 | orchestrator | Thursday 05 March 2026 01:15:10 +0000 (0:00:02.580) 0:04:09.389 ******** 2026-03-05 01:20:10.967765 | orchestrator | 2026-03-05 01:20:10.967771 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-05 01:20:10.967778 | orchestrator | Thursday 05 March 2026 01:15:10 +0000 (0:00:00.144) 0:04:09.534 ******** 2026-03-05 01:20:10.967784 | orchestrator | 2026-03-05 01:20:10.967790 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-05 01:20:10.967797 | orchestrator | Thursday 05 March 2026 01:15:10 +0000 (0:00:00.140) 0:04:09.675 ******** 2026-03-05 01:20:10.967803 | orchestrator | 2026-03-05 01:20:10.967819 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-05 01:20:10.967825 | orchestrator | Thursday 05 March 2026 01:15:10 +0000 (0:00:00.148) 0:04:09.824 ******** 2026-03-05 01:20:10.967844 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:10.967851 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:20:10.967857 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:20:10.967863 | orchestrator | 2026-03-05 01:20:10.967869 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 
2026-03-05 01:20:10.967876 | orchestrator | Thursday 05 March 2026 01:15:27 +0000 (0:00:17.300) 0:04:27.124 ******** 2026-03-05 01:20:10.967883 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:10.967890 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:20:10.967897 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:20:10.967904 | orchestrator | 2026-03-05 01:20:10.967912 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-05 01:20:10.967919 | orchestrator | 2026-03-05 01:20:10.967926 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-05 01:20:10.967934 | orchestrator | Thursday 05 March 2026 01:15:38 +0000 (0:00:10.828) 0:04:37.953 ******** 2026-03-05 01:20:10.967941 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:20:10.967949 | orchestrator | 2026-03-05 01:20:10.967956 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-05 01:20:10.967963 | orchestrator | Thursday 05 March 2026 01:15:39 +0000 (0:00:01.331) 0:04:39.285 ******** 2026-03-05 01:20:10.967971 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:20:10.967979 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:20:10.967992 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:20:10.968005 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.968017 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.968029 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.968041 | orchestrator | 2026-03-05 01:20:10.968053 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-05 01:20:10.968065 | orchestrator | Thursday 05 March 2026 01:15:40 +0000 (0:00:00.589) 0:04:39.874 ******** 2026-03-05 
01:20:10.968076 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.968088 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.968101 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.968112 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-05 01:20:10.968124 | orchestrator | 2026-03-05 01:20:10.968136 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-05 01:20:10.968147 | orchestrator | Thursday 05 March 2026 01:15:41 +0000 (0:00:01.127) 0:04:41.002 ******** 2026-03-05 01:20:10.968160 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-05 01:20:10.968172 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-05 01:20:10.968186 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-05 01:20:10.968197 | orchestrator | 2026-03-05 01:20:10.968209 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-05 01:20:10.968222 | orchestrator | Thursday 05 March 2026 01:15:42 +0000 (0:00:00.661) 0:04:41.663 ******** 2026-03-05 01:20:10.968236 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-05 01:20:10.968250 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-05 01:20:10.968262 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-05 01:20:10.968275 | orchestrator | 2026-03-05 01:20:10.968288 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-05 01:20:10.968301 | orchestrator | Thursday 05 March 2026 01:15:43 +0000 (0:00:01.408) 0:04:43.072 ******** 2026-03-05 01:20:10.968314 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-05 01:20:10.968327 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:20:10.968341 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-05 
01:20:10.968354 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:20:10.968439 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-05 01:20:10.968454 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:20:10.968466 | orchestrator | 2026-03-05 01:20:10.968478 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-05 01:20:10.968491 | orchestrator | Thursday 05 March 2026 01:15:44 +0000 (0:00:00.552) 0:04:43.624 ******** 2026-03-05 01:20:10.968503 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-05 01:20:10.968516 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-05 01:20:10.968528 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.968541 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-05 01:20:10.968554 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-05 01:20:10.968567 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.968580 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-05 01:20:10.968600 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-05 01:20:10.968613 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-05 01:20:10.968625 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.968637 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-05 01:20:10.968649 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-05 01:20:10.968662 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-05 01:20:10.968674 | orchestrator | changed: [testbed-node-3] => 
(item=net.bridge.bridge-nf-call-ip6tables) 2026-03-05 01:20:10.968686 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-05 01:20:10.968698 | orchestrator | 2026-03-05 01:20:10.968711 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-05 01:20:10.968723 | orchestrator | Thursday 05 March 2026 01:15:47 +0000 (0:00:03.323) 0:04:46.947 ******** 2026-03-05 01:20:10.968735 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.968747 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.968759 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.968771 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:20:10.968793 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:20:10.968805 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:20:10.968817 | orchestrator | 2026-03-05 01:20:10.968829 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-05 01:20:10.968841 | orchestrator | Thursday 05 March 2026 01:15:48 +0000 (0:00:01.318) 0:04:48.265 ******** 2026-03-05 01:20:10.968853 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.968865 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.968877 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.968889 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:20:10.968900 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:20:10.968912 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:20:10.968924 | orchestrator | 2026-03-05 01:20:10.968936 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-05 01:20:10.968948 | orchestrator | Thursday 05 March 2026 01:15:50 +0000 (0:00:01.980) 0:04:50.246 ******** 2026-03-05 01:20:10.968961 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:20:10.968987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969015 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969029 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969055 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969127 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969153 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969173 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969486 | orchestrator | 2026-03-05 01:20:10.969498 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-05 01:20:10.969510 | orchestrator | Thursday 05 March 2026 01:15:53 +0000 (0:00:02.263) 0:04:52.509 ******** 2026-03-05 01:20:10.969532 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:20:10.969547 | orchestrator | 2026-03-05 01:20:10.969559 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-05 01:20:10.969572 | orchestrator | Thursday 05 March 2026 01:15:54 +0000 (0:00:01.415) 0:04:53.925 ******** 2026-03-05 01:20:10.969585 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969712 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969808 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969828 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:10.969841 | orchestrator | 2026-03-05 01:20:10.969853 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-05 01:20:10.969865 | orchestrator | Thursday 05 March 2026 01:15:58 +0000 (0:00:03.906) 0:04:57.831 ******** 2026-03-05 01:20:10.969883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:20:10.969896 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:20:10.969913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.969926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:20:10.969945 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:20:10.969958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:20:10.969970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.969983 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:20:10.970003 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:20:10.970056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.970074 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.970090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:20:10.970111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:20:10.970126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.970140 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:20:10.970156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:20:10.970179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.970193 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.970213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:20:10.970235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.970250 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.970264 | orchestrator | 2026-03-05 01:20:10.970278 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-05 01:20:10.970292 | orchestrator | Thursday 05 March 2026 01:16:00 +0000 (0:00:01.599) 0:04:59.431 ******** 2026-03-05 01:20:10.970307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:20:10.970322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:20:10.970342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.970357 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:20:10.970376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:20:10.970420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:20:10.970434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.970447 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:20:10.970460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-05 01:20:10.970473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-05 01:20:10.970494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.970508 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:20:10.970534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-05 01:20:10.970548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.970561 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.970575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-05 01:20:10.970588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.970601 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.970614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-05 01:20:10.970633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.970648 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.970660 | orchestrator |
2026-03-05 01:20:10.970681 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-05 01:20:10.970694 | orchestrator | Thursday 05 March 2026 01:16:02 +0000 (0:00:02.307) 0:05:01.739 ********
2026-03-05 01:20:10.970707 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.970720 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.970733 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.970746 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-05 01:20:10.970758 | orchestrator |
2026-03-05 01:20:10.970780 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-03-05 01:20:10.970793 | orchestrator | Thursday 05 March 2026 01:16:03 +0000 (0:00:01.091) 0:05:02.830 ********
2026-03-05 01:20:10.970806 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-05 01:20:10.970819 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-05 01:20:10.970832 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-05 01:20:10.970844 | orchestrator |
2026-03-05 01:20:10.970857 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-03-05 01:20:10.970870 | orchestrator | Thursday 05 March 2026 01:16:04 +0000 (0:00:00.989) 0:05:03.819 ********
2026-03-05 01:20:10.970883 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-05 01:20:10.970896 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-05 01:20:10.970908 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-05 01:20:10.970921 | orchestrator |
2026-03-05 01:20:10.970934 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-03-05 01:20:10.970946 | orchestrator | Thursday 05 March 2026 01:16:05 +0000 (0:00:01.047) 0:05:04.866 ********
2026-03-05 01:20:10.970959 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:20:10.970973 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:20:10.970985 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:20:10.970998 | orchestrator |
2026-03-05 01:20:10.971010 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-03-05 01:20:10.971023 | orchestrator | Thursday 05 March 2026 01:16:06 +0000 (0:00:00.567) 0:05:05.434 ********
2026-03-05 01:20:10.971036 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:20:10.971048 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:20:10.971061 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:20:10.971073 | orchestrator |
2026-03-05 01:20:10.971086 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-03-05 01:20:10.971099 | orchestrator | Thursday 05 March 2026 01:16:06 +0000 (0:00:00.775) 0:05:06.209 ********
2026-03-05 01:20:10.971112 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-05 01:20:10.971125 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-05 01:20:10.971138 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-05 01:20:10.971150 | orchestrator |
2026-03-05 01:20:10.971163 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-03-05 01:20:10.971176 | orchestrator | Thursday 05 March 2026 01:16:08 +0000 (0:00:01.267) 0:05:07.476 ********
2026-03-05 01:20:10.971188 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-05 01:20:10.971202 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-05 01:20:10.971214 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-05 01:20:10.971225 | orchestrator |
2026-03-05 01:20:10.971237 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-03-05 01:20:10.971249 | orchestrator | Thursday 05 March 2026 01:16:09 +0000 (0:00:01.316) 0:05:08.793 ********
2026-03-05 01:20:10.971260 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-05 01:20:10.971271 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-05 01:20:10.971283 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-05 01:20:10.971296 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-03-05 01:20:10.971309 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-03-05 01:20:10.971594 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-03-05 01:20:10.971612 | orchestrator |
2026-03-05 01:20:10.971625 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-03-05 01:20:10.971638 | orchestrator | Thursday 05 March 2026 01:16:13 +0000 (0:00:03.955) 0:05:12.748 ********
2026-03-05 01:20:10.971651 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:20:10.971664 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:20:10.971676 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:20:10.971689 | orchestrator |
2026-03-05 01:20:10.971701 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-03-05 01:20:10.971714 | orchestrator | Thursday 05 March 2026 01:16:14 +0000 (0:00:00.640) 0:05:13.389 ********
2026-03-05 01:20:10.971726 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:20:10.971739 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:20:10.971752 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:20:10.971764 | orchestrator |
2026-03-05 01:20:10.971776 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-03-05 01:20:10.971789 | orchestrator | Thursday 05 March 2026 01:16:14 +0000 (0:00:00.389) 0:05:13.778 ********
2026-03-05 01:20:10.971802 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:20:10.971815 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:20:10.971827 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:20:10.971840 | orchestrator |
2026-03-05 01:20:10.971852 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-03-05 01:20:10.971865 | orchestrator | Thursday 05 March 2026 01:16:15 +0000 (0:00:01.367) 0:05:15.146 ********
2026-03-05 01:20:10.971887 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-05 01:20:10.971901 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-05 01:20:10.971914 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-05 01:20:10.971927 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-05 01:20:10.971939 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-05 01:20:10.971959 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-05 01:20:10.971971 | orchestrator |
2026-03-05 01:20:10.971984 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-03-05 01:20:10.971997 | orchestrator | Thursday 05 March 2026 01:16:19 +0000 (0:00:03.511) 0:05:18.658 ********
2026-03-05 01:20:10.972010 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-05 01:20:10.972023 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-05 01:20:10.972035 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-05 01:20:10.972048 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-05 01:20:10.972060 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:20:10.972073 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-05 01:20:10.972085 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:20:10.972098 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-05 01:20:10.972110 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:20:10.972123 | orchestrator |
2026-03-05 01:20:10.972136 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-03-05 01:20:10.972148 | orchestrator | Thursday 05 March 2026 01:16:22 +0000 (0:00:00.168) 0:05:22.109 ********
2026-03-05 01:20:10.972161 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:20:10.972173 | orchestrator |
2026-03-05 01:20:10.972186 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-03-05 01:20:10.972207 | orchestrator | Thursday 05 March 2026 01:16:22 +0000 (0:00:00.168) 0:05:22.277 ********
2026-03-05 01:20:10.972220 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:20:10.972232 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:20:10.972245 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:20:10.972257 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.972270 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.972282 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.972295 | orchestrator |
2026-03-05 01:20:10.972308 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-03-05 01:20:10.972320 | orchestrator | Thursday 05 March 2026 01:16:23 +0000 (0:00:00.603) 0:05:22.880 ********
2026-03-05 01:20:10.972333 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-05 01:20:10.972345 | orchestrator |
2026-03-05 01:20:10.972358 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-03-05 01:20:10.972370 | orchestrator | Thursday 05 March 2026 01:16:24 +0000 (0:00:00.751) 0:05:23.632 ********
2026-03-05 01:20:10.972402 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:20:10.972416 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:20:10.972429 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:20:10.972442 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.972454 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.972467 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.972479 | orchestrator |
2026-03-05 01:20:10.972492 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-03-05 01:20:10.972504 | orchestrator | Thursday 05 March 2026 01:16:25 +0000 (0:00:00.827) 0:05:24.459 ********
2026-03-05 01:20:10.972519 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-05 01:20:10.972540 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-05 01:20:10.972558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-05 01:20:10.972580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-05 01:20:10.972594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-05 01:20:10.972607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-05 01:20:10.972621 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-05 01:20:10.972640 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-05 01:20:10.972658 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-05 01:20:10.972678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.972692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.972705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.972718 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.972739 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.972756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.972777 | orchestrator |
2026-03-05 01:20:10.972790 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-03-05 01:20:10.972803 | orchestrator | Thursday 05 March 2026 01:16:28 +0000 (0:00:03.748) 0:05:28.207 ********
2026-03-05 01:20:10.972816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-05 01:20:10.972830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-05 01:20:10.972843 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-05 01:20:10.972863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-05 01:20:10.972881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-05 01:20:10.972902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-05 01:20:10.972916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.972929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-05 01:20:10.972942 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.972962 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.972987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-05 01:20:10.973000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-05 01:20:10.973013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.973026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.973040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.973053 | orchestrator |
2026-03-05 01:20:10.973065 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-03-05 01:20:10.973078 | orchestrator | Thursday 05 March 2026 01:16:36 +0000 (0:00:07.406) 0:05:35.614 ********
2026-03-05 01:20:10.973091 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:20:10.973104 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:20:10.973116 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:20:10.973129 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.973141 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.973154 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.973166 | orchestrator |
2026-03-05 01:20:10.973179 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-03-05 01:20:10.973203 | orchestrator | Thursday 05 March 2026 01:16:37 +0000 (0:00:01.325) 0:05:36.940 ********
2026-03-05 01:20:10.973216 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-05 01:20:10.973228 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-05 01:20:10.973246 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-05 01:20:10.973259 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-05 01:20:10.973271 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-05 01:20:10.973284 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-05 01:20:10.973297 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-05 01:20:10.973309 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.973321 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-05 01:20:10.973334 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.973355 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-05 01:20:10.973368 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.973381 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-05 01:20:10.973415 | orchestrator | changed:
[testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-05 01:20:10.973429 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-05 01:20:10.973442 | orchestrator | 2026-03-05 01:20:10.973455 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-05 01:20:10.973468 | orchestrator | Thursday 05 March 2026 01:16:41 +0000 (0:00:03.861) 0:05:40.801 ******** 2026-03-05 01:20:10.973481 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:20:10.973494 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:20:10.973507 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:20:10.973520 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.973533 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.973546 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.973559 | orchestrator | 2026-03-05 01:20:10.973572 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-05 01:20:10.973585 | orchestrator | Thursday 05 March 2026 01:16:42 +0000 (0:00:00.624) 0:05:41.425 ******** 2026-03-05 01:20:10.973598 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-05 01:20:10.973611 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-05 01:20:10.973624 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-05 01:20:10.973638 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-05 01:20:10.973651 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-05 
01:20:10.973664 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-05 01:20:10.973677 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-05 01:20:10.973690 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-05 01:20:10.973702 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-05 01:20:10.973721 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-05 01:20:10.973733 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.973744 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-05 01:20:10.973756 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.973769 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-05 01:20:10.973783 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.973796 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-05 01:20:10.973809 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-05 01:20:10.973821 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-05 01:20:10.973834 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-05 01:20:10.973846 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 
'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-05 01:20:10.973860 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-05 01:20:10.973873 | orchestrator | 2026-03-05 01:20:10.973886 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-05 01:20:10.973899 | orchestrator | Thursday 05 March 2026 01:16:47 +0000 (0:00:05.636) 0:05:47.062 ******** 2026-03-05 01:20:10.974005 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-05 01:20:10.974053 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-05 01:20:10.974066 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-05 01:20:10.974079 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-05 01:20:10.974091 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-05 01:20:10.974104 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-05 01:20:10.974116 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-05 01:20:10.974129 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-05 01:20:10.974148 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-05 01:20:10.974161 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-05 01:20:10.974173 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-05 01:20:10.974185 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  
2026-03-05 01:20:10.974197 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.974210 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-05 01:20:10.974223 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-05 01:20:10.974235 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.974248 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-05 01:20:10.974261 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-05 01:20:10.974273 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.974284 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-05 01:20:10.974297 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-05 01:20:10.974318 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-05 01:20:10.974331 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-05 01:20:10.974344 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-05 01:20:10.974356 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-05 01:20:10.974368 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-05 01:20:10.974380 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-05 01:20:10.974415 | orchestrator | 2026-03-05 01:20:10.974427 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-05 01:20:10.974439 | orchestrator | Thursday 05 March 2026 01:16:55 +0000 (0:00:08.086) 0:05:55.148 ******** 2026-03-05 
01:20:10.974451 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:20:10.974464 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:20:10.974477 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:20:10.974490 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.974502 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.974514 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.974526 | orchestrator | 2026-03-05 01:20:10.974537 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-05 01:20:10.974551 | orchestrator | Thursday 05 March 2026 01:16:56 +0000 (0:00:00.751) 0:05:55.900 ******** 2026-03-05 01:20:10.974564 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:20:10.974576 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:20:10.974589 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:20:10.974601 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.974613 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.974625 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.974639 | orchestrator | 2026-03-05 01:20:10.974651 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-05 01:20:10.974664 | orchestrator | Thursday 05 March 2026 01:16:57 +0000 (0:00:00.574) 0:05:56.474 ******** 2026-03-05 01:20:10.974677 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.974689 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.974702 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.974715 | orchestrator | changed: [testbed-node-3] 2026-03-05 01:20:10.974727 | orchestrator | changed: [testbed-node-4] 2026-03-05 01:20:10.974739 | orchestrator | changed: [testbed-node-5] 2026-03-05 01:20:10.974752 | orchestrator | 2026-03-05 01:20:10.974765 | orchestrator | TASK [nova-cell : Copying over existing policy file] 
*************************** 2026-03-05 01:20:10.974777 | orchestrator | Thursday 05 March 2026 01:16:59 +0000 (0:00:02.077) 0:05:58.551 ******** 2026-03-05 01:20:10.974799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:20:10.974819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:20:10.974843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.974856 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:20:10.974869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 01:20:10.974882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:20:10.974896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.974913 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:20:10.974933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-05 
01:20:10.974953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-05 01:20:10.974967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.974980 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:20:10.974993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:20:10.975006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.975019 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.975037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:20:10.975061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-05 01:20:10.975073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.975085 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.975097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-05 01:20:10.975109 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.975121 | orchestrator | 2026-03-05 01:20:10.975133 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-05 01:20:10.975145 | orchestrator | Thursday 05 March 2026 01:17:00 +0000 (0:00:01.387) 0:05:59.939 ******** 2026-03-05 01:20:10.975157 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-05 
01:20:10.975169 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-05 01:20:10.975180 | orchestrator | skipping: [testbed-node-3] 2026-03-05 01:20:10.975192 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-05 01:20:10.975203 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-05 01:20:10.975214 | orchestrator | skipping: [testbed-node-4] 2026-03-05 01:20:10.975225 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-05 01:20:10.975236 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-05 01:20:10.975247 | orchestrator | skipping: [testbed-node-5] 2026-03-05 01:20:10.975259 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-05 01:20:10.975270 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-05 01:20:10.975282 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:10.975293 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-05 01:20:10.975305 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-05 01:20:10.975317 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:10.975328 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-05 01:20:10.975339 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-05 01:20:10.975351 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:10.975362 | orchestrator | 2026-03-05 01:20:10.975374 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-05 01:20:10.975408 | orchestrator | Thursday 05 March 2026 01:17:01 +0000 (0:00:01.086) 0:06:01.026 ******** 2026-03-05 01:20:10.975426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:20:10.975452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-05 01:20:10.975464 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-05 01:20:10.975477 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-05 01:20:10.975489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-05 01:20:10.975502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-05 01:20:10.975526 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-05 01:20:10.975543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-05 01:20:10.975556 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-05 01:20:10.975568 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.975580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.975592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.975617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.975639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.975651 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-05 01:20:10.975663 | orchestrator |
2026-03-05 01:20:10.975675 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-05 01:20:10.975687 | orchestrator | Thursday 05 March 2026 01:17:04 +0000 (0:00:03.182) 0:06:04.209 ********
2026-03-05 01:20:10.975699 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:20:10.975711 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:20:10.975722 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:20:10.975734 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.975745 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.975757 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.975768 | orchestrator |
2026-03-05 01:20:10.975780 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-05 01:20:10.975792 | orchestrator | Thursday 05 March 2026 01:17:05 +0000 (0:00:00.917) 0:06:05.126 ********
2026-03-05 01:20:10.975804 | orchestrator |
2026-03-05 01:20:10.975816 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-05 01:20:10.975827 | orchestrator | Thursday 05 March 2026 01:17:05 +0000 (0:00:00.138) 0:06:05.265 ********
2026-03-05 01:20:10.975839 | orchestrator |
2026-03-05 01:20:10.975851 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-05 01:20:10.975862 | orchestrator | Thursday 05 March 2026 01:17:06 +0000 (0:00:00.150) 0:06:05.416 ********
2026-03-05 01:20:10.975874 | orchestrator |
2026-03-05 01:20:10.975893 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-05 01:20:10.975905 | orchestrator | Thursday 05 March 2026 01:17:06 +0000 (0:00:00.145) 0:06:05.561 ********
2026-03-05 01:20:10.975917 | orchestrator |
2026-03-05 01:20:10.975929 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-05 01:20:10.975940 | orchestrator | Thursday 05 March 2026 01:17:06 +0000 (0:00:00.133) 0:06:05.695 ********
2026-03-05 01:20:10.975952 | orchestrator |
2026-03-05 01:20:10.975964 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-05 01:20:10.975975 | orchestrator | Thursday 05 March 2026 01:17:06 +0000 (0:00:00.133) 0:06:05.828 ********
2026-03-05 01:20:10.975987 | orchestrator |
2026-03-05 01:20:10.975999 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-03-05 01:20:10.976010 | orchestrator | Thursday 05 March 2026 01:17:06 +0000 (0:00:00.335) 0:06:06.163 ********
2026-03-05 01:20:10.976022 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.976034 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:20:10.976046 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:20:10.976057 | orchestrator |
2026-03-05 01:20:10.976069 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-03-05 01:20:10.976080 | orchestrator | Thursday 05 March 2026 01:17:14 +0000 (0:00:07.199) 0:06:13.363 ********
2026-03-05 01:20:10.976092 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.976104 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:20:10.976115 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:20:10.976127 | orchestrator |
2026-03-05 01:20:10.976139 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-03-05 01:20:10.976151 | orchestrator | Thursday 05 March 2026 01:17:27 +0000 (0:00:13.480) 0:06:26.843 ********
2026-03-05 01:20:10.976162 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:20:10.976174 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:20:10.976185 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:20:10.976197 | orchestrator |
2026-03-05 01:20:10.976209 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-03-05 01:20:10.976220 | orchestrator | Thursday 05 March 2026 01:17:50 +0000 (0:00:23.397) 0:06:50.240 ********
2026-03-05 01:20:10.976232 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:20:10.976244 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:20:10.976255 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:20:10.976267 | orchestrator |
2026-03-05 01:20:10.976278 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-03-05 01:20:10.976294 | orchestrator | Thursday 05 March 2026 01:18:20 +0000 (0:00:29.358) 0:07:19.599 ********
2026-03-05 01:20:10.976306 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left).
2026-03-05 01:20:10.976319 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2026-03-05 01:20:10.976330 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2026-03-05 01:20:10.976342 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:20:10.976354 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:20:10.976366 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:20:10.976377 | orchestrator |
2026-03-05 01:20:10.976437 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-03-05 01:20:10.976450 | orchestrator | Thursday 05 March 2026 01:18:26 +0000 (0:00:06.290) 0:07:25.889 ********
2026-03-05 01:20:10.976462 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:20:10.976479 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:20:10.976491 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:20:10.976502 | orchestrator |
2026-03-05 01:20:10.976514 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-03-05 01:20:10.976526 | orchestrator | Thursday 05 March 2026 01:18:27 +0000 (0:00:00.818) 0:07:26.708 ********
2026-03-05 01:20:10.976537 | orchestrator | changed: [testbed-node-4]
2026-03-05 01:20:10.976556 | orchestrator | changed: [testbed-node-3]
2026-03-05 01:20:10.976568 | orchestrator | changed: [testbed-node-5]
2026-03-05 01:20:10.976579 | orchestrator |
2026-03-05 01:20:10.976591 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-03-05 01:20:10.976602 | orchestrator | Thursday 05 March 2026 01:18:53 +0000 (0:00:25.888) 0:07:52.597 ********
2026-03-05 01:20:10.976614 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:20:10.976626 | orchestrator |
2026-03-05 01:20:10.976637 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-03-05 01:20:10.976649 | orchestrator | Thursday 05 March 2026 01:18:53 +0000 (0:00:00.207) 0:07:52.804 ********
2026-03-05 01:20:10.976661 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:20:10.976672 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:20:10.976684 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.976695 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.976706 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.976718 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-03-05 01:20:10.976730 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-05 01:20:10.976742 | orchestrator |
2026-03-05 01:20:10.976753 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-03-05 01:20:10.976765 | orchestrator | Thursday 05 March 2026 01:19:16 +0000 (0:00:23.025) 0:08:15.830 ********
2026-03-05 01:20:10.976777 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.976788 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:20:10.976800 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:20:10.976812 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:20:10.976824 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.976835 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.976847 | orchestrator |
2026-03-05 01:20:10.976858 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-03-05 01:20:10.976870 | orchestrator | Thursday 05 March 2026 01:19:27 +0000 (0:00:11.021) 0:08:26.852 ********
2026-03-05 01:20:10.976882 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:20:10.976893 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:20:10.976905 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.976916 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.976928 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.976940 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-03-05 01:20:10.976951 | orchestrator |
2026-03-05 01:20:10.976963 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-05 01:20:10.976975 | orchestrator | Thursday 05 March 2026 01:19:31 +0000 (0:00:04.418) 0:08:31.271 ********
2026-03-05 01:20:10.976986 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-05 01:20:10.976998 | orchestrator |
2026-03-05 01:20:10.977010 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-05 01:20:10.977021 | orchestrator | Thursday 05 March 2026 01:19:45 +0000 (0:00:13.863) 0:08:45.134 ********
2026-03-05 01:20:10.977033 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-05 01:20:10.977045 | orchestrator |
2026-03-05 01:20:10.977056 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-03-05 01:20:10.977068 | orchestrator | Thursday 05 March 2026 01:19:47 +0000 (0:00:01.258) 0:08:46.392 ********
2026-03-05 01:20:10.977079 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:20:10.977091 | orchestrator |
2026-03-05 01:20:10.977102 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-03-05 01:20:10.977114 | orchestrator | Thursday 05 March 2026 01:19:48 +0000 (0:00:01.489) 0:08:47.882 ********
2026-03-05 01:20:10.977125 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-05 01:20:10.977137 | orchestrator |
2026-03-05 01:20:10.977149 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-03-05 01:20:10.977168 | orchestrator | Thursday 05 March 2026 01:20:01 +0000 (0:00:12.816) 0:09:00.699 ********
2026-03-05 01:20:10.977178 | orchestrator | ok: [testbed-node-3]
2026-03-05 01:20:10.977189 | orchestrator | ok: [testbed-node-4]
2026-03-05 01:20:10.977201 | orchestrator | ok: [testbed-node-5]
2026-03-05 01:20:10.977212 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:20:10.977224 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:20:10.977236 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:20:10.977247 | orchestrator |
2026-03-05 01:20:10.977259 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-03-05 01:20:10.977270 | orchestrator |
2026-03-05 01:20:10.977282 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-03-05 01:20:10.977294 | orchestrator | Thursday 05 March 2026 01:20:03 +0000 (0:00:02.091) 0:09:02.790 ********
2026-03-05 01:20:10.977310 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:10.977322 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:20:10.977334 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:20:10.977345 | orchestrator |
2026-03-05 01:20:10.977357 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-03-05 01:20:10.977369 | orchestrator |
2026-03-05 01:20:10.977381 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-03-05 01:20:10.977410 | orchestrator | Thursday 05 March 2026 01:20:04 +0000 (0:00:01.195) 0:09:03.986 ********
2026-03-05 01:20:10.977422 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.977433 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.977445 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.977457 | orchestrator |
2026-03-05 01:20:10.977468 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-03-05 01:20:10.977480 | orchestrator |
2026-03-05 01:20:10.977491 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-03-05 01:20:10.977508 | orchestrator | Thursday 05 March 2026 01:20:05 +0000 (0:00:00.508) 0:09:04.495 ********
2026-03-05 01:20:10.977519 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-03-05 01:20:10.977531 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-05 01:20:10.977543 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-05 01:20:10.977554 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-03-05 01:20:10.977565 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-05 01:20:10.977576 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-05 01:20:10.977587 | orchestrator | skipping: [testbed-node-3]
2026-03-05 01:20:10.977598 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-05 01:20:10.977609 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-05 01:20:10.977620 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-05 01:20:10.977631 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-05 01:20:10.977642 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-05 01:20:10.977653 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-05 01:20:10.977664 | orchestrator | skipping: [testbed-node-4]
2026-03-05 01:20:10.977675 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-05 01:20:10.977685 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-05 01:20:10.977697 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-05 01:20:10.977708 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-05 01:20:10.977718 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-05 01:20:10.977729 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-05 01:20:10.977740 | orchestrator | skipping: [testbed-node-5]
2026-03-05 01:20:10.977751 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-05 01:20:10.977770 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-05 01:20:10.977781 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-05 01:20:10.977792 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-05 01:20:10.977803 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-05 01:20:10.977814 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-05 01:20:10.977825 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.977836 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-05 01:20:10.977847 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-05 01:20:10.977858 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-05 01:20:10.977869 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-05 01:20:10.977879 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-05 01:20:10.977890 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-05 01:20:10.977901 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.977912 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-05 01:20:10.977923 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-05 01:20:10.977934 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-05 01:20:10.977945 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-05 01:20:10.977955 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-05 01:20:10.977966 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-05 01:20:10.977977 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.977987 | orchestrator |
2026-03-05 01:20:10.977998 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-05 01:20:10.978009 | orchestrator |
2026-03-05 01:20:10.978046 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-05 01:20:10.978058 | orchestrator | Thursday 05 March 2026 01:20:06 +0000 (0:00:01.592) 0:09:06.088 ********
2026-03-05 01:20:10.978069 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-05 01:20:10.978079 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-05 01:20:10.978091 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.978101 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-05 01:20:10.978112 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-05 01:20:10.978123 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.978133 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-05 01:20:10.978144 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-05 01:20:10.978155 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.978165 | orchestrator |
2026-03-05 01:20:10.978188 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-05 01:20:10.978199 | orchestrator |
2026-03-05 01:20:10.978210 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-05 01:20:10.978221 | orchestrator | Thursday 05 March 2026 01:20:07 +0000 (0:00:00.856) 0:09:06.944 ********
2026-03-05 01:20:10.978231 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.978242 | orchestrator |
2026-03-05 01:20:10.978253 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-05 01:20:10.978264 | orchestrator |
2026-03-05 01:20:10.978275 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-05 01:20:10.978286 | orchestrator | Thursday 05 March 2026 01:20:08 +0000 (0:00:00.687) 0:09:07.632 ********
2026-03-05 01:20:10.978297 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:10.978307 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:10.978318 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:10.978329 | orchestrator |
2026-03-05 01:20:10.978340 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:20:10.978362 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:20:10.978374 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-03-05 01:20:10.978431 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-05 01:20:10.978443 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-05 01:20:10.978454 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-05 01:20:10.978465 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-03-05 01:20:10.978476 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-05 01:20:10.978487 | orchestrator |
2026-03-05 01:20:10.978498 | orchestrator |
2026-03-05 01:20:10.978509 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:20:10.978520 | orchestrator | Thursday 05 March 2026 01:20:08 +0000 (0:00:00.489) 0:09:08.121 ********
2026-03-05 01:20:10.978531 | orchestrator | ===============================================================================
2026-03-05 01:20:10.978541 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 37.82s
2026-03-05 01:20:10.978552 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 29.36s
2026-03-05 01:20:10.978563 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.89s
2026-03-05 01:20:10.978574 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 24.33s
2026-03-05 01:20:10.978585 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 23.40s
2026-03-05 01:20:10.978596 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.03s
2026-03-05 01:20:10.978607 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.08s
2026-03-05 01:20:10.978618 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 17.54s
2026-03-05 01:20:10.978629 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 17.30s
2026-03-05 01:20:10.978639 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.10s
2026-03-05 01:20:10.978650 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.68s
2026-03-05 01:20:10.978661 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.65s
2026-03-05 01:20:10.978672 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.86s
2026-03-05 01:20:10.978683 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.48s
2026-03-05 01:20:10.978693 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.82s
2026-03-05 01:20:10.978704 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.02s
2026-03-05 01:20:10.978715 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.83s
2026-03-05 01:20:10.978726 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.11s
2026-03-05 01:20:10.978736 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.09s
2026-03-05 01:20:10.978747 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.07s
2026-03-05 01:20:10.978758 | orchestrator | 2026-03-05 01:20:10 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:13.922309 | orchestrator | 2026-03-05 01:20:13 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:13.922444 | orchestrator | 2026-03-05 01:20:13 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:16.971559 | orchestrator | 2026-03-05 01:20:16 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:16.971672 | orchestrator | 2026-03-05 01:20:16 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:20.024364 | orchestrator | 2026-03-05 01:20:20 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:20.285323 | orchestrator | 2026-03-05 01:20:20 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:23.076496 | orchestrator | 2026-03-05 01:20:23 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:23.076598 | orchestrator | 2026-03-05 01:20:23 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:26.122125 | orchestrator | 2026-03-05 01:20:26 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:26.122213 | orchestrator | 2026-03-05 01:20:26 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:29.167353 | orchestrator | 2026-03-05 01:20:29 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:29.167519 | orchestrator | 2026-03-05 01:20:29 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:32.212642 | orchestrator | 2026-03-05 01:20:32 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:32.212730 | orchestrator | 2026-03-05 01:20:32 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:35.267691 | orchestrator | 2026-03-05 01:20:35 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:35.315715 | orchestrator | 2026-03-05 01:20:35 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:38.320336 | orchestrator | 2026-03-05 01:20:38 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:38.320470 | orchestrator | 2026-03-05 01:20:38 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:41.362116 | orchestrator | 2026-03-05 01:20:41 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:41.362191 | orchestrator | 2026-03-05 01:20:41 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:44.418766 | orchestrator | 2026-03-05 01:20:44 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:44.418847 | orchestrator | 2026-03-05 01:20:44 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:47.465808 | orchestrator | 2026-03-05 01:20:47 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:47.465919 | orchestrator | 2026-03-05 01:20:47 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:50.515318 | orchestrator | 2026-03-05 01:20:50 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:50.515488 | orchestrator | 2026-03-05 01:20:50 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:53.564661 | orchestrator | 2026-03-05 01:20:53 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state STARTED
2026-03-05 01:20:53.564763 | orchestrator | 2026-03-05 01:20:53 | INFO  | Wait 1 second(s) until the next check
2026-03-05 01:20:56.606248 | orchestrator | 2026-03-05 01:20:56 | INFO  | Task 663e881a-e133-44c9-ad50-e588bff53d62 is in state SUCCESS
2026-03-05 01:20:56.607613 | orchestrator |
2026-03-05 01:20:56.607648 | orchestrator |
2026-03-05 01:20:56.607658 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-05 01:20:56.607691 | orchestrator |
2026-03-05 01:20:56.607699 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-05 01:20:56.607708 | orchestrator | Thursday 05 March 2026 01:15:40 +0000 (0:00:00.291) 0:00:00.291 ********
2026-03-05 01:20:56.607715 | orchestrator | ok: [testbed-node-0]
2026-03-05 01:20:56.607724 | orchestrator | ok: [testbed-node-1]
2026-03-05 01:20:56.607732 | orchestrator | ok: [testbed-node-2]
2026-03-05 01:20:56.607740 | orchestrator |
2026-03-05 01:20:56.607747 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-05 01:20:56.607755 | orchestrator | Thursday 05 March 2026 01:15:41 +0000 (0:00:00.367) 0:00:00.658 ********
2026-03-05 01:20:56.607762 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-03-05 01:20:56.607770 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-03-05 01:20:56.607777 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-03-05 01:20:56.607785 | orchestrator |
2026-03-05 01:20:56.607792 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-03-05 01:20:56.607799 | orchestrator |
2026-03-05 01:20:56.607807 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-05 01:20:56.607814 | orchestrator | Thursday 05 March 2026 01:15:41 +0000 (0:00:00.527) 0:00:01.186 ********
2026-03-05 01:20:56.607821 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-05 01:20:56.607830 | orchestrator |
2026-03-05 01:20:56.607837 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-03-05 01:20:56.607845 | orchestrator | Thursday 05 March 2026 01:15:42 +0000 (0:00:00.626) 0:00:01.813 ********
2026-03-05 01:20:56.607853 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-03-05 01:20:56.607860 | orchestrator |
2026-03-05 01:20:56.607868 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-03-05 01:20:56.607875 | orchestrator | Thursday 05 March 2026 01:15:46 +0000 (0:00:03.882) 0:00:05.695 ********
2026-03-05 01:20:56.607882 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-03-05 01:20:56.607890 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-03-05 01:20:56.607897 | orchestrator |
2026-03-05 01:20:56.607904 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-03-05 01:20:56.607912 | orchestrator | Thursday 05 March 2026 01:15:53 +0000 (0:00:06.783) 0:00:12.478 ********
2026-03-05 01:20:56.607919 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-05 01:20:56.607927 | orchestrator |
2026-03-05 01:20:56.607934 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-03-05 01:20:56.607954 | orchestrator | Thursday 05 March 2026 01:15:56 +0000 (0:00:03.433) 0:00:15.912 ********
2026-03-05 01:20:56.607962 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-05 01:20:56.607969 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-05 01:20:56.607977 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-05 01:20:56.607985 | orchestrator |
2026-03-05 01:20:56.607993 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-05 01:20:56.608000 | orchestrator | Thursday 05 March 2026 01:16:05 +0000 (0:00:08.830) 0:00:24.743 ******** 2026-03-05 01:20:56.608007 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-05 01:20:56.608015 | orchestrator | 2026-03-05 01:20:56.608022 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-05 01:20:56.608030 | orchestrator | Thursday 05 March 2026 01:16:09 +0000 (0:00:04.000) 0:00:28.743 ******** 2026-03-05 01:20:56.608037 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-05 01:20:56.608044 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-05 01:20:56.608051 | orchestrator | 2026-03-05 01:20:56.608065 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-05 01:20:56.608072 | orchestrator | Thursday 05 March 2026 01:16:17 +0000 (0:00:08.372) 0:00:37.116 ******** 2026-03-05 01:20:56.608079 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-05 01:20:56.608087 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-05 01:20:56.608094 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-05 01:20:56.608101 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-05 01:20:56.608109 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-05 01:20:56.608116 | orchestrator | 2026-03-05 01:20:56.608123 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-05 01:20:56.608131 | orchestrator | Thursday 05 March 2026 01:16:35 +0000 (0:00:17.474) 0:00:54.591 ******** 2026-03-05 01:20:56.608138 | orchestrator | included: 
/ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:20:56.608145 | orchestrator | 2026-03-05 01:20:56.608152 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-05 01:20:56.608160 | orchestrator | Thursday 05 March 2026 01:16:35 +0000 (0:00:00.667) 0:00:55.259 ******** 2026-03-05 01:20:56.608167 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:56.608174 | orchestrator | 2026-03-05 01:20:56.608182 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-05 01:20:56.608189 | orchestrator | Thursday 05 March 2026 01:16:41 +0000 (0:00:05.895) 0:01:01.155 ******** 2026-03-05 01:20:56.608197 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:56.608205 | orchestrator | 2026-03-05 01:20:56.608214 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-05 01:20:56.608233 | orchestrator | Thursday 05 March 2026 01:16:47 +0000 (0:00:05.277) 0:01:06.432 ******** 2026-03-05 01:20:56.608242 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:20:56.608251 | orchestrator | 2026-03-05 01:20:56.608259 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-05 01:20:56.608268 | orchestrator | Thursday 05 March 2026 01:16:51 +0000 (0:00:03.898) 0:01:10.331 ******** 2026-03-05 01:20:56.608277 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-05 01:20:56.608286 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-05 01:20:56.608294 | orchestrator | 2026-03-05 01:20:56.608303 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-05 01:20:56.608311 | orchestrator | Thursday 05 March 2026 01:17:02 +0000 (0:00:11.270) 0:01:21.602 ******** 2026-03-05 01:20:56.608320 | orchestrator | changed: [testbed-node-0] => 
(item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-05 01:20:56.608329 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-05 01:20:56.608359 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-05 01:20:56.608369 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-05 01:20:56.608378 | orchestrator | 2026-03-05 01:20:56.608386 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-05 01:20:56.608395 | orchestrator | Thursday 05 March 2026 01:17:20 +0000 (0:00:18.422) 0:01:40.024 ******** 2026-03-05 01:20:56.608404 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:56.608412 | orchestrator | 2026-03-05 01:20:56.608420 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-05 01:20:56.608429 | orchestrator | Thursday 05 March 2026 01:17:26 +0000 (0:00:05.703) 0:01:45.728 ******** 2026-03-05 01:20:56.608439 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:56.608447 | orchestrator | 2026-03-05 01:20:56.608462 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-05 01:20:56.608470 | orchestrator | Thursday 05 March 2026 01:17:32 +0000 (0:00:06.338) 0:01:52.066 ******** 2026-03-05 01:20:56.608479 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:56.608488 | orchestrator | 2026-03-05 01:20:56.608496 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-05 01:20:56.608504 | orchestrator | Thursday 05 March 2026 01:17:33 +0000 (0:00:00.253) 0:01:52.321 ******** 
2026-03-05 01:20:56.608513 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:20:56.608521 | orchestrator | 2026-03-05 01:20:56.608529 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-05 01:20:56.608542 | orchestrator | Thursday 05 March 2026 01:17:38 +0000 (0:00:05.499) 0:01:57.820 ******** 2026-03-05 01:20:56.608552 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:20:56.608561 | orchestrator | 2026-03-05 01:20:56.608569 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-05 01:20:56.608577 | orchestrator | Thursday 05 March 2026 01:17:39 +0000 (0:00:01.153) 0:01:58.973 ******** 2026-03-05 01:20:56.608584 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:20:56.608591 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:20:56.608599 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:56.608606 | orchestrator | 2026-03-05 01:20:56.608613 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-05 01:20:56.608620 | orchestrator | Thursday 05 March 2026 01:17:45 +0000 (0:00:05.732) 0:02:04.705 ******** 2026-03-05 01:20:56.608627 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:56.608635 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:20:56.608642 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:20:56.608649 | orchestrator | 2026-03-05 01:20:56.608656 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-05 01:20:56.608664 | orchestrator | Thursday 05 March 2026 01:17:50 +0000 (0:00:05.247) 0:02:09.953 ******** 2026-03-05 01:20:56.608671 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:56.608678 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:20:56.608685 | orchestrator | changed: [testbed-node-2] 
2026-03-05 01:20:56.608693 | orchestrator | 2026-03-05 01:20:56.608700 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-05 01:20:56.608707 | orchestrator | Thursday 05 March 2026 01:17:51 +0000 (0:00:00.851) 0:02:10.805 ******** 2026-03-05 01:20:56.608715 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:20:56.608722 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:20:56.608729 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:20:56.608736 | orchestrator | 2026-03-05 01:20:56.608744 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-05 01:20:56.609110 | orchestrator | Thursday 05 March 2026 01:17:53 +0000 (0:00:02.477) 0:02:13.283 ******** 2026-03-05 01:20:56.609129 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:56.609141 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:20:56.609153 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:20:56.609166 | orchestrator | 2026-03-05 01:20:56.609179 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-05 01:20:56.609191 | orchestrator | Thursday 05 March 2026 01:17:55 +0000 (0:00:01.644) 0:02:14.927 ******** 2026-03-05 01:20:56.609201 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:56.609208 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:20:56.609215 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:20:56.609223 | orchestrator | 2026-03-05 01:20:56.609230 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-05 01:20:56.609237 | orchestrator | Thursday 05 March 2026 01:17:56 +0000 (0:00:01.354) 0:02:16.282 ******** 2026-03-05 01:20:56.609245 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:20:56.609252 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:20:56.609259 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:56.609276 | 
orchestrator | 2026-03-05 01:20:56.609290 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-05 01:20:56.609298 | orchestrator | Thursday 05 March 2026 01:17:59 +0000 (0:00:02.199) 0:02:18.481 ******** 2026-03-05 01:20:56.609305 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:56.609313 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:20:56.609320 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:20:56.609327 | orchestrator | 2026-03-05 01:20:56.609352 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-05 01:20:56.609363 | orchestrator | Thursday 05 March 2026 01:18:01 +0000 (0:00:02.084) 0:02:20.566 ******** 2026-03-05 01:20:56.609371 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:20:56.609378 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:20:56.609386 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:20:56.609393 | orchestrator | 2026-03-05 01:20:56.609400 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-05 01:20:56.609408 | orchestrator | Thursday 05 March 2026 01:18:01 +0000 (0:00:00.681) 0:02:21.248 ******** 2026-03-05 01:20:56.609415 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:20:56.609423 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:20:56.609430 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:20:56.609437 | orchestrator | 2026-03-05 01:20:56.609445 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-05 01:20:56.609452 | orchestrator | Thursday 05 March 2026 01:18:05 +0000 (0:00:03.750) 0:02:24.998 ******** 2026-03-05 01:20:56.609460 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:20:56.609467 | orchestrator | 2026-03-05 01:20:56.609474 | orchestrator | TASK [octavia : Get amphora flavor 
info] *************************************** 2026-03-05 01:20:56.609482 | orchestrator | Thursday 05 March 2026 01:18:06 +0000 (0:00:00.605) 0:02:25.604 ******** 2026-03-05 01:20:56.609489 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:20:56.609496 | orchestrator | 2026-03-05 01:20:56.609504 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-05 01:20:56.609511 | orchestrator | Thursday 05 March 2026 01:18:10 +0000 (0:00:04.675) 0:02:30.279 ******** 2026-03-05 01:20:56.609558 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:20:56.609568 | orchestrator | 2026-03-05 01:20:56.609575 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-05 01:20:56.609583 | orchestrator | Thursday 05 March 2026 01:18:14 +0000 (0:00:03.691) 0:02:33.970 ******** 2026-03-05 01:20:56.609590 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-05 01:20:56.609598 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-05 01:20:56.609605 | orchestrator | 2026-03-05 01:20:56.609613 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-05 01:20:56.609620 | orchestrator | Thursday 05 March 2026 01:18:22 +0000 (0:00:08.273) 0:02:42.243 ******** 2026-03-05 01:20:56.609628 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:20:56.609635 | orchestrator | 2026-03-05 01:20:56.609643 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-05 01:20:56.609656 | orchestrator | Thursday 05 March 2026 01:18:26 +0000 (0:00:03.874) 0:02:46.117 ******** 2026-03-05 01:20:56.609954 | orchestrator | ok: [testbed-node-0] 2026-03-05 01:20:56.609965 | orchestrator | ok: [testbed-node-1] 2026-03-05 01:20:56.609973 | orchestrator | ok: [testbed-node-2] 2026-03-05 01:20:56.609982 | orchestrator | 2026-03-05 01:20:56.609990 | orchestrator | TASK [octavia : Ensuring 
config directories exist] ***************************** 2026-03-05 01:20:56.609999 | orchestrator | Thursday 05 March 2026 01:18:27 +0000 (0:00:00.390) 0:02:46.508 ******** 2026-03-05 01:20:56.610011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:56.610115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:56.610127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:56.610137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:20:56.610153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:20:56.610161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:20:56.610175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610303 | orchestrator | 2026-03-05 01:20:56.610311 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-05 01:20:56.610319 | orchestrator | Thursday 05 March 2026 01:18:29 +0000 (0:00:02.770) 0:02:49.278 ******** 2026-03-05 01:20:56.610326 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:56.610384 | orchestrator | 2026-03-05 01:20:56.610394 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-05 01:20:56.610402 | orchestrator | Thursday 05 March 2026 01:18:30 +0000 (0:00:00.130) 0:02:49.409 ******** 2026-03-05 01:20:56.610409 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:56.610416 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:56.610424 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:56.610431 | orchestrator | 2026-03-05 01:20:56.610438 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-05 01:20:56.610446 | orchestrator | Thursday 05 March 2026 01:18:30 +0000 (0:00:00.616) 0:02:50.026 ******** 2026-03-05 01:20:56.610454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:20:56.610467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:20:56.610481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.610489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.610496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:20:56.610504 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:56.610536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:20:56.610546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:20:56.610563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.610578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:20:56.610585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.610613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:20:56.610622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:20:56.610630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.610638 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:56.610645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.610663 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:20:56.610670 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:56.610678 | orchestrator | 2026-03-05 01:20:56.610685 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-05 01:20:56.610693 | orchestrator | Thursday 05 March 2026 01:18:31 +0000 (0:00:00.899) 0:02:50.925 ******** 2026-03-05 01:20:56.610700 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-05 01:20:56.610706 | orchestrator | 2026-03-05 01:20:56.610713 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-05 01:20:56.610719 | orchestrator | Thursday 05 March 2026 01:18:32 +0000 (0:00:00.587) 0:02:51.513 ******** 2026-03-05 01:20:56.610727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:56.610753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:56.610762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:56.610778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:20:56.610785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:20:56.610792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:20:56.610799 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 
01:20:56.610830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 
01:20:56.610856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:56.610888 | orchestrator | 2026-03-05 01:20:56.610895 | orchestrator | TASK [service-cert-copy : octavia | 
Copying over backend internal TLS certificate] *** 2026-03-05 01:20:56.610902 | orchestrator | Thursday 05 March 2026 01:18:38 +0000 (0:00:06.193) 0:02:57.707 ******** 2026-03-05 01:20:56.610909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:20:56.610920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:20:56.610927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.610934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.610944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:20:56.610952 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:56.610959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:20:56.610970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:20:56.610981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.610988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.610995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:20:56.611002 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:56.611014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:20:56.611025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:20:56.611033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.611043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.611050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:20:56.611057 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:56.611064 | orchestrator | 2026-03-05 01:20:56.611070 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-05 01:20:56.611077 | orchestrator | Thursday 05 March 2026 01:18:39 +0000 (0:00:00.716) 0:02:58.423 ******** 2026-03-05 01:20:56.611085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:20:56.611096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:20:56.611109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.611116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.611128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:20:56.611135 | orchestrator | skipping: [testbed-node-0] 2026-03-05 01:20:56.611142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:20:56.611149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:20:56.611163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.611174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.611182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:20:56.611189 | orchestrator | skipping: [testbed-node-1] 2026-03-05 01:20:56.611199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-05 01:20:56.611207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-05 01:20:56.611214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.611231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-05 01:20:56.611238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-03-05 01:20:56.611245 | orchestrator | skipping: [testbed-node-2] 2026-03-05 01:20:56.611252 | orchestrator | 2026-03-05 01:20:56.611259 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-05 01:20:56.611265 | orchestrator | Thursday 05 March 2026 01:18:40 +0000 (0:00:00.909) 0:02:59.332 ******** 2026-03-05 01:20:56.611276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:56.611283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:56.611291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:56.611306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:20:56.611314 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:20:56.611351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:20:56.611360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611440 | orchestrator | 2026-03-05 01:20:56.611447 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-05 01:20:56.611454 | orchestrator | Thursday 05 March 2026 01:18:45 +0000 (0:00:05.323) 0:03:04.656 ******** 2026-03-05 01:20:56.611466 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-05 01:20:56.611473 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-05 01:20:56.611480 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-05 01:20:56.611487 | orchestrator | 2026-03-05 01:20:56.611493 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-05 01:20:56.611500 | orchestrator | Thursday 05 March 2026 01:18:47 +0000 (0:00:02.030) 0:03:06.686 ******** 2026-03-05 01:20:56.611512 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:56.611520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:56.611530 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-05 01:20:56.611538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:20:56.611549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 
2026-03-05 01:20:56.611556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-05 01:20:56.611567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611581 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611611 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 
'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-05 01:20:56.611645 | orchestrator | 2026-03-05 01:20:56.611652 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-05 01:20:56.611659 | orchestrator | Thursday 05 March 2026 01:19:08 +0000 (0:00:20.681) 0:03:27.367 ******** 2026-03-05 01:20:56.611665 | orchestrator | changed: [testbed-node-0] 2026-03-05 01:20:56.611672 | orchestrator | changed: [testbed-node-1] 2026-03-05 01:20:56.611679 | orchestrator | changed: [testbed-node-2] 2026-03-05 01:20:56.611685 | orchestrator | 2026-03-05 01:20:56.611692 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-05 01:20:56.611699 | orchestrator | Thursday 05 March 2026 01:19:09 +0000 (0:00:01.633) 0:03:29.001 ******** 2026-03-05 01:20:56.611705 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-05 01:20:56.611712 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-05 01:20:56.611719 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-05 01:20:56.611725 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-05 01:20:56.611732 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-05 01:20:56.611749 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-05 01:20:56.611756 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-05 01:20:56.611763 | 
orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-05 01:20:56.611770 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-05 01:20:56.611776 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-05 01:20:56.611783 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-05 01:20:56.611789 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-05 01:20:56.611796 | orchestrator | 2026-03-05 01:20:56.611803 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-05 01:20:56.611809 | orchestrator | Thursday 05 March 2026 01:19:15 +0000 (0:00:05.364) 0:03:34.366 ******** 2026-03-05 01:20:56.611816 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-05 01:20:56.611823 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-05 01:20:56.611829 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-05 01:20:56.611836 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-05 01:20:56.611843 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-05 01:20:56.611849 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-05 01:20:56.611856 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-05 01:20:56.611863 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-05 01:20:56.611870 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-05 01:20:56.611876 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-05 01:20:56.611884 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-05 01:20:56.611895 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-05 01:20:56.611906 | orchestrator | 
2026-03-05 01:20:56.611916 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-03-05 01:20:56.611935 | orchestrator | Thursday 05 March 2026 01:19:23 +0000 (0:00:08.128) 0:03:42.494 ********
2026-03-05 01:20:56.611947 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-05 01:20:56.611957 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-05 01:20:56.611967 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-05 01:20:56.611977 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-05 01:20:56.611987 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-05 01:20:56.611997 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-05 01:20:56.612008 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-05 01:20:56.612017 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-05 01:20:56.612033 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-05 01:20:56.612044 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-05 01:20:56.612054 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-05 01:20:56.612065 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-05 01:20:56.612076 | orchestrator |
2026-03-05 01:20:56.612087 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-03-05 01:20:56.612096 | orchestrator | Thursday 05 March 2026 01:19:29 +0000 (0:00:06.501) 0:03:48.996 ********
2026-03-05 01:20:56.612108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-05 01:20:56.612132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-05 01:20:56.612143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-05 01:20:56.612154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-05 01:20:56.612170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-05 01:20:56.612181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-05 01:20:56.612200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-05 01:20:56.612216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-05 01:20:56.612228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-05 01:20:56.612240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-05 01:20:56.612252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-05 01:20:56.612269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-05 01:20:56.612292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:20:56.612303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:20:56.612319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-05 01:20:56.612331 | orchestrator |
2026-03-05 01:20:56.612364 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-05 01:20:56.612376 | orchestrator | Thursday 05 March 2026 01:19:33 +0000 (0:00:04.243) 0:03:53.240 ********
2026-03-05 01:20:56.612386 | orchestrator | skipping: [testbed-node-0]
2026-03-05 01:20:56.612396 | orchestrator | skipping: [testbed-node-1]
2026-03-05 01:20:56.612407 | orchestrator | skipping: [testbed-node-2]
2026-03-05 01:20:56.612419 | orchestrator |
2026-03-05 01:20:56.612430 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-03-05 01:20:56.612441 | orchestrator | Thursday 05 March 2026 01:19:34 +0000 (0:00:00.369) 0:03:53.610 ********
2026-03-05 01:20:56.612452 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:56.612464 | orchestrator |
2026-03-05 01:20:56.612475 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-03-05 01:20:56.612486 | orchestrator | Thursday 05 March 2026 01:19:36 +0000 (0:00:02.279) 0:03:55.890 ********
2026-03-05 01:20:56.612497 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:56.612509 | orchestrator |
2026-03-05 01:20:56.612516 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-03-05 01:20:56.612523 | orchestrator | Thursday 05 March 2026 01:19:38 +0000 (0:00:02.320) 0:03:58.210 ********
2026-03-05 01:20:56.612530 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:56.612537 | orchestrator |
2026-03-05 01:20:56.612543 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-03-05 01:20:56.612550 | orchestrator | Thursday 05 March 2026 01:19:41 +0000 (0:00:02.481) 0:04:00.692 ********
2026-03-05 01:20:56.612557 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:56.612564 | orchestrator |
2026-03-05 01:20:56.612571 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-03-05 01:20:56.612577 | orchestrator | Thursday 05 March 2026 01:19:44 +0000 (0:00:03.014) 0:04:03.707 ********
2026-03-05 01:20:56.612584 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:56.612591 | orchestrator |
2026-03-05 01:20:56.612598 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-05 01:20:56.612605 | orchestrator | Thursday 05 March 2026 01:20:08 +0000 (0:00:24.042) 0:04:27.749 ********
2026-03-05 01:20:56.612619 | orchestrator |
2026-03-05 01:20:56.612626 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-05 01:20:56.612633 | orchestrator | Thursday 05 March 2026 01:20:08 +0000 (0:00:00.073) 0:04:27.823 ********
2026-03-05 01:20:56.612639 | orchestrator |
2026-03-05 01:20:56.612646 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-05 01:20:56.612653 | orchestrator | Thursday 05 March 2026 01:20:08 +0000 (0:00:00.090) 0:04:27.914 ********
2026-03-05 01:20:56.612660 | orchestrator |
2026-03-05 01:20:56.612667 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-03-05 01:20:56.612680 | orchestrator | Thursday 05 March 2026 01:20:08 +0000 (0:00:00.079) 0:04:27.993 ********
2026-03-05 01:20:56.612687 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:56.612694 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:20:56.612700 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:20:56.612707 | orchestrator |
2026-03-05 01:20:56.612714 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-03-05 01:20:56.612721 | orchestrator | Thursday 05 March 2026 01:20:19 +0000 (0:00:11.222) 0:04:39.216 ********
2026-03-05 01:20:56.612727 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:20:56.612734 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:56.612741 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:20:56.612748 | orchestrator |
2026-03-05 01:20:56.612754 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-03-05 01:20:56.612761 | orchestrator | Thursday 05 March 2026 01:20:31 +0000 (0:00:11.595) 0:04:50.812 ********
2026-03-05 01:20:56.612768 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:20:56.612775 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:20:56.612781 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:56.612788 | orchestrator |
2026-03-05 01:20:56.612795 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-03-05 01:20:56.612801 | orchestrator | Thursday 05 March 2026 01:20:40 +0000 (0:00:09.101) 0:04:59.914 ********
2026-03-05 01:20:56.612808 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:56.612815 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:20:56.612821 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:20:56.612828 | orchestrator |
2026-03-05 01:20:56.612835 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-03-05 01:20:56.612841 | orchestrator | Thursday 05 March 2026 01:20:46 +0000 (0:00:05.428) 0:05:05.342 ********
2026-03-05 01:20:56.612848 | orchestrator | changed: [testbed-node-2]
2026-03-05 01:20:56.612855 | orchestrator | changed: [testbed-node-1]
2026-03-05 01:20:56.612861 | orchestrator | changed: [testbed-node-0]
2026-03-05 01:20:56.612868 | orchestrator |
2026-03-05 01:20:56.612875 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:20:56.612882 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-05 01:20:56.612890 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-05 01:20:56.612897 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-05 01:20:56.612903 | orchestrator |
2026-03-05 01:20:56.612910 | orchestrator |
2026-03-05 01:20:56.612917 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:20:56.612924 | orchestrator | Thursday 05 March 2026 01:20:54 +0000 (0:00:08.675) 0:05:14.018 ********
2026-03-05 01:20:56.612935 | orchestrator | ===============================================================================
2026-03-05 01:20:56.612942 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 24.04s
2026-03-05 01:20:56.612949 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 20.68s
2026-03-05 01:20:56.612960 | orchestrator | octavia : Add rules for security groups -------------------------------- 18.42s
2026-03-05 01:20:56.612967 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.48s
2026-03-05 01:20:56.612974 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.60s
2026-03-05 01:20:56.612980 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.27s
2026-03-05 01:20:56.612987 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.22s
2026-03-05 01:20:56.612994 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 9.10s
2026-03-05 01:20:56.613000 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.83s
2026-03-05 01:20:56.613007 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.68s
2026-03-05 01:20:56.613014 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.37s
2026-03-05 01:20:56.613020 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.27s
2026-03-05 01:20:56.613027 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 8.13s
2026-03-05 01:20:56.613034 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.78s
2026-03-05 01:20:56.613041 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 6.50s
2026-03-05 01:20:56.613047 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.34s
2026-03-05 01:20:56.613054 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 6.19s
2026-03-05 01:20:56.613061 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.90s
2026-03-05 01:20:56.613067 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.73s
2026-03-05 01:20:56.613074 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.70s
2026-03-05 01:20:56.613081 | orchestrator | 2026-03-05 01:20:56 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:20:59.653879 | orchestrator | 2026-03-05 01:20:59 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:02.697445 | orchestrator | 2026-03-05 01:21:02 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:05.746410 | orchestrator | 2026-03-05 01:21:05 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:08.792878 | orchestrator | 2026-03-05 01:21:08 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:11.840111 | orchestrator | 2026-03-05 01:21:11 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:14.884605 | orchestrator | 2026-03-05 01:21:14 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:17.929006 | orchestrator | 2026-03-05 01:21:17 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:20.975616 | orchestrator | 2026-03-05 01:21:20 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:24.023610 | orchestrator | 2026-03-05 01:21:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:27.060030 | orchestrator | 2026-03-05 01:21:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:30.097043 | orchestrator | 2026-03-05 01:21:30 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:33.139732 | orchestrator | 2026-03-05 01:21:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:36.186464 | orchestrator | 2026-03-05 01:21:36 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:39.232179 | orchestrator | 2026-03-05 01:21:39 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:42.271706 | orchestrator | 2026-03-05 01:21:42 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:45.320310 | orchestrator | 2026-03-05 01:21:45 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:48.365809 | orchestrator | 2026-03-05 01:21:48 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:51.409824 | orchestrator | 2026-03-05 01:21:51 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:54.455015 | orchestrator | 2026-03-05 01:21:54 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-05 01:21:57.500440 | orchestrator |
2026-03-05 01:21:57.851578 | orchestrator |
2026-03-05 01:21:57.855572 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Mar 5 01:21:57 UTC 2026
2026-03-05 01:21:57.855637 | orchestrator |
2026-03-05 01:21:58.200260 | orchestrator | ok: Runtime: 0:38:28.143353
2026-03-05 01:21:58.467480 |
2026-03-05 01:21:58.467640 | TASK [Bootstrap services]
2026-03-05 01:21:59.238409 | orchestrator |
2026-03-05 01:21:59.238542 | orchestrator | # BOOTSTRAP
2026-03-05 01:21:59.238552 | orchestrator |
2026-03-05 01:21:59.238557 | orchestrator | + set -e
2026-03-05 01:21:59.238562 | orchestrator | + echo
2026-03-05 01:21:59.238567 | orchestrator | + echo '# BOOTSTRAP'
2026-03-05 01:21:59.238575 | orchestrator | + echo
2026-03-05 01:21:59.238596 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-03-05 01:21:59.248756 | orchestrator | + set -e
2026-03-05 01:21:59.248823 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-03-05 01:22:03.998065 | orchestrator | 2026-03-05 01:22:03 | INFO  | It takes a moment until task c20640a0-a507-4a37-9054-e2e6127ce661 (flavor-manager) has been started and output is visible here.
2026-03-05 01:22:12.536344 | orchestrator | 2026-03-05 01:22:07 | INFO  | Flavor SCS-1L-1 created
2026-03-05 01:22:12.536426 | orchestrator | 2026-03-05 01:22:07 | INFO  | Flavor SCS-1L-1-5 created
2026-03-05 01:22:12.536434 | orchestrator | 2026-03-05 01:22:08 | INFO  | Flavor SCS-1V-2 created
2026-03-05 01:22:12.536439 | orchestrator | 2026-03-05 01:22:08 | INFO  | Flavor SCS-1V-2-5 created
2026-03-05 01:22:12.536454 | orchestrator | 2026-03-05 01:22:08 | INFO  | Flavor SCS-1V-4 created
2026-03-05 01:22:12.536458 | orchestrator | 2026-03-05 01:22:08 | INFO  | Flavor SCS-1V-4-10 created
2026-03-05 01:22:12.536469 | orchestrator | 2026-03-05 01:22:08 | INFO  | Flavor SCS-1V-8 created
2026-03-05 01:22:12.536474 | orchestrator | 2026-03-05 01:22:08 | INFO  | Flavor SCS-1V-8-20 created
2026-03-05 01:22:12.536483 | orchestrator | 2026-03-05 01:22:08 | INFO  | Flavor SCS-2V-4 created
2026-03-05 01:22:12.536488 | orchestrator | 2026-03-05 01:22:09 | INFO  | Flavor SCS-2V-4-10 created
2026-03-05 01:22:12.536498 | orchestrator | 2026-03-05 01:22:09 | INFO  | Flavor SCS-2V-8 created
2026-03-05 01:22:12.536502 | orchestrator | 2026-03-05 01:22:09 | INFO  | Flavor SCS-2V-8-20 created
2026-03-05 01:22:12.536506 | orchestrator | 2026-03-05 01:22:09 | INFO  | Flavor SCS-2V-16 created
2026-03-05 01:22:12.536510 | orchestrator | 2026-03-05 01:22:09 | INFO  | Flavor SCS-2V-16-50 created
2026-03-05 01:22:12.536515 | orchestrator | 2026-03-05 01:22:10 | INFO  | Flavor SCS-4V-8 created
2026-03-05 01:22:12.536519 | orchestrator | 2026-03-05 01:22:10 | INFO  | Flavor SCS-4V-8-20 created
2026-03-05 01:22:12.536523 | orchestrator | 2026-03-05 01:22:10 | INFO  | Flavor SCS-4V-16 created
2026-03-05 01:22:12.536527 | orchestrator | 2026-03-05 01:22:10 | INFO  | Flavor SCS-4V-16-50 created
2026-03-05 01:22:12.536532 | orchestrator | 2026-03-05 01:22:10 | INFO  | Flavor SCS-4V-32 created
2026-03-05 01:22:12.536536 | orchestrator | 2026-03-05 01:22:11 | INFO  | Flavor SCS-4V-32-100 created
2026-03-05 01:22:12.536540 | orchestrator | 2026-03-05 01:22:11 | INFO  | Flavor SCS-8V-16 created
2026-03-05 01:22:12.536544 | orchestrator | 2026-03-05 01:22:11 | INFO  | Flavor SCS-8V-16-50 created
2026-03-05 01:22:12.536549 | orchestrator | 2026-03-05 01:22:11 | INFO  | Flavor SCS-8V-32 created
2026-03-05 01:22:12.536553 | orchestrator | 2026-03-05 01:22:11 | INFO  | Flavor SCS-8V-32-100 created
2026-03-05 01:22:12.536557 | orchestrator | 2026-03-05 01:22:11 | INFO  | Flavor SCS-16V-32 created
2026-03-05 01:22:12.536561 | orchestrator | 2026-03-05 01:22:11 | INFO  | Flavor SCS-16V-32-100 created
2026-03-05 01:22:12.536566 | orchestrator | 2026-03-05 01:22:12 | INFO  | Flavor SCS-2V-4-20s created
2026-03-05 01:22:12.536570 | orchestrator | 2026-03-05 01:22:12 | INFO  | Flavor SCS-4V-8-50s created
2026-03-05 01:22:12.536574 | orchestrator | 2026-03-05 01:22:12 | INFO  | Flavor SCS-8V-32-100s created
2026-03-05 01:22:14.618814 | orchestrator | 2026-03-05 01:22:14 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-03-05 01:22:14.677412 | orchestrator | 2026-03-05 01:22:14 | INFO  | Task ec8e5dff-75cc-4402-a761-cfff3e686b1a (bootstrap-basic) was prepared for execution.
2026-03-05 01:22:14.677471 | orchestrator | 2026-03-05 01:22:14 | INFO  | It takes a moment until task ec8e5dff-75cc-4402-a761-cfff3e686b1a (bootstrap-basic) has been started and output is visible here.
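The flavor names logged above follow the SCS naming scheme: for example, SCS-2V-4-10 encodes 2 vCPUs of class V, 4 GiB of RAM, and a 10 GB root disk, with a trailing "s" (as in SCS-8V-32-100s) marking an SSD-backed disk. A minimal sketch of how such a name could be decoded — the pattern below is inferred from the names in this log, not taken from the SCS standard text, and `parse_scs_flavor` is a hypothetical helper, not part of flavor-manager:

```python
import re

# Assumed layout (inferred from the log): SCS-<vcpus><class>-<ram>[-<disk>[s]]
SCS_NAME = re.compile(
    r"^SCS-(?P<vcpus>\d+)(?P<cpu_class>[A-Z])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_scs_flavor(name: str) -> dict:
    """Decode an SCS-style flavor name into its resource components."""
    m = SCS_NAME.match(name)
    if m is None:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m.group("vcpus")),
        "cpu_class": m.group("cpu_class"),
        "ram_gib": int(m.group("ram")),
        # Disk is optional: SCS-1L-1 has no disk component at all.
        "disk_gb": int(m.group("disk")) if m.group("disk") else None,
        "ssd": m.group("ssd") == "s",
    }
```

For instance, `parse_scs_flavor("SCS-8V-32-100s")` yields 8 vCPUs, 32 GiB RAM, a 100 GB disk, and `ssd=True`.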
2026-03-05 01:23:02.714010 | orchestrator |
2026-03-05 01:23:02.714251 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-03-05 01:23:02.714274 | orchestrator |
2026-03-05 01:23:02.714286 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-05 01:23:02.714298 | orchestrator | Thursday 05 March 2026 01:22:18 +0000 (0:00:00.063) 0:00:00.063 ********
2026-03-05 01:23:02.714310 | orchestrator | ok: [localhost]
2026-03-05 01:23:02.714323 | orchestrator |
2026-03-05 01:23:02.714334 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-03-05 01:23:02.714345 | orchestrator | Thursday 05 March 2026 01:22:20 +0000 (0:00:01.738) 0:00:01.802 ********
2026-03-05 01:23:02.714356 | orchestrator | ok: [localhost]
2026-03-05 01:23:02.714367 | orchestrator |
2026-03-05 01:23:02.714379 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-03-05 01:23:02.714390 | orchestrator | Thursday 05 March 2026 01:22:29 +0000 (0:00:09.235) 0:00:11.038 ********
2026-03-05 01:23:02.714401 | orchestrator | changed: [localhost]
2026-03-05 01:23:02.714413 | orchestrator |
2026-03-05 01:23:02.714424 | orchestrator | TASK [Create public network] ***************************************************
2026-03-05 01:23:02.714436 | orchestrator | Thursday 05 March 2026 01:22:37 +0000 (0:00:07.845) 0:00:18.883 ********
2026-03-05 01:23:02.714447 | orchestrator | changed: [localhost]
2026-03-05 01:23:02.714458 | orchestrator |
2026-03-05 01:23:02.714468 | orchestrator | TASK [Set public network to default] *******************************************
2026-03-05 01:23:02.714479 | orchestrator | Thursday 05 March 2026 01:22:43 +0000 (0:00:05.479) 0:00:24.363 ********
2026-03-05 01:23:02.714495 | orchestrator | changed: [localhost]
2026-03-05 01:23:02.714506 | orchestrator |
2026-03-05 01:23:02.714518 | orchestrator | TASK [Create public subnet] ****************************************************
2026-03-05 01:23:02.714531 | orchestrator | Thursday 05 March 2026 01:22:49 +0000 (0:00:06.569) 0:00:30.932 ********
2026-03-05 01:23:02.714545 | orchestrator | changed: [localhost]
2026-03-05 01:23:02.714559 | orchestrator |
2026-03-05 01:23:02.714571 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-03-05 01:23:02.714584 | orchestrator | Thursday 05 March 2026 01:22:54 +0000 (0:00:04.867) 0:00:35.799 ********
2026-03-05 01:23:02.714597 | orchestrator | changed: [localhost]
2026-03-05 01:23:02.714609 | orchestrator |
2026-03-05 01:23:02.714623 | orchestrator | TASK [Create manager role] *****************************************************
2026-03-05 01:23:02.714648 | orchestrator | Thursday 05 March 2026 01:22:58 +0000 (0:00:04.043) 0:00:39.842 ********
2026-03-05 01:23:02.714661 | orchestrator | ok: [localhost]
2026-03-05 01:23:02.714674 | orchestrator |
2026-03-05 01:23:02.714687 | orchestrator | PLAY RECAP *********************************************************************
2026-03-05 01:23:02.714700 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-05 01:23:02.714713 | orchestrator |
2026-03-05 01:23:02.714726 | orchestrator |
2026-03-05 01:23:02.714739 | orchestrator | TASKS RECAP ********************************************************************
2026-03-05 01:23:02.714753 | orchestrator | Thursday 05 March 2026 01:23:02 +0000 (0:00:03.852) 0:00:43.695 ********
2026-03-05 01:23:02.714766 | orchestrator | ===============================================================================
2026-03-05 01:23:02.714779 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.24s
2026-03-05 01:23:02.714792 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.85s
2026-03-05 01:23:02.714805 | orchestrator | Set public network to default ------------------------------------------- 6.57s
2026-03-05 01:23:02.714817 | orchestrator | Create public network --------------------------------------------------- 5.48s
2026-03-05 01:23:02.714854 | orchestrator | Create public subnet ---------------------------------------------------- 4.87s
2026-03-05 01:23:02.714868 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.04s
2026-03-05 01:23:02.714881 | orchestrator | Create manager role ----------------------------------------------------- 3.85s
2026-03-05 01:23:02.714894 | orchestrator | Gathering Facts --------------------------------------------------------- 1.74s
2026-03-05 01:23:05.488750 | orchestrator | 2026-03-05 01:23:05 | INFO  | It takes a moment until task 836d4daa-7968-4ed0-8f09-b4e104c475af (image-manager) has been started and output is visible here.
2026-03-05 01:23:50.586648 | orchestrator | 2026-03-05 01:23:08 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-05 01:23:50.586737 | orchestrator | 2026-03-05 01:23:08 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-05 01:23:50.586750 | orchestrator | 2026-03-05 01:23:08 | INFO  | Importing image Cirros 0.6.2
2026-03-05 01:23:50.586756 | orchestrator | 2026-03-05 01:23:08 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-05 01:23:50.586761 | orchestrator | 2026-03-05 01:23:11 | INFO  | Waiting for image to leave queued state...
2026-03-05 01:23:50.586767 | orchestrator | 2026-03-05 01:23:13 | INFO  | Waiting for import to complete...
2026-03-05 01:23:50.586771 | orchestrator | 2026-03-05 01:23:23 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-05 01:23:50.586776 | orchestrator | 2026-03-05 01:23:23 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-05 01:23:50.586780 | orchestrator | 2026-03-05 01:23:23 | INFO  | Setting internal_version = 0.6.2
2026-03-05 01:23:50.586784 | orchestrator | 2026-03-05 01:23:23 | INFO  | Setting image_original_user = cirros
2026-03-05 01:23:50.586789 | orchestrator | 2026-03-05 01:23:23 | INFO  | Adding tag os:cirros
2026-03-05 01:23:50.586793 | orchestrator | 2026-03-05 01:23:24 | INFO  | Setting property architecture: x86_64
2026-03-05 01:23:50.586797 | orchestrator | 2026-03-05 01:23:24 | INFO  | Setting property hw_disk_bus: scsi
2026-03-05 01:23:50.586801 | orchestrator | 2026-03-05 01:23:24 | INFO  | Setting property hw_rng_model: virtio
2026-03-05 01:23:50.586805 | orchestrator | 2026-03-05 01:23:25 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-05 01:23:50.586809 | orchestrator | 2026-03-05 01:23:25 | INFO  | Setting property hw_watchdog_action: reset
2026-03-05 01:23:50.586812 | orchestrator | 2026-03-05 01:23:25 | INFO  | Setting property hypervisor_type: qemu
2026-03-05 01:23:50.586816 | orchestrator | 2026-03-05 01:23:25 | INFO  | Setting property os_distro: cirros
2026-03-05 01:23:50.586820 | orchestrator | 2026-03-05 01:23:26 | INFO  | Setting property os_purpose: minimal
2026-03-05 01:23:50.586824 | orchestrator | 2026-03-05 01:23:26 | INFO  | Setting property replace_frequency: never
2026-03-05 01:23:50.586828 | orchestrator | 2026-03-05 01:23:26 | INFO  | Setting property uuid_validity: none
2026-03-05 01:23:50.586832 | orchestrator | 2026-03-05 01:23:26 | INFO  | Setting property provided_until: none
2026-03-05 01:23:50.586835 | orchestrator | 2026-03-05 01:23:27 | INFO  | Setting property image_description: Cirros
2026-03-05 01:23:50.586839 | orchestrator | 2026-03-05 01:23:27 | INFO  | Setting property image_name: Cirros
2026-03-05 01:23:50.586843 | orchestrator | 2026-03-05 01:23:27 | INFO  | Setting property internal_version: 0.6.2
2026-03-05 01:23:50.586847 | orchestrator | 2026-03-05 01:23:28 | INFO  | Setting property image_original_user: cirros
2026-03-05 01:23:50.586873 | orchestrator | 2026-03-05 01:23:28 | INFO  | Setting property os_version: 0.6.2
2026-03-05 01:23:50.586883 | orchestrator | 2026-03-05 01:23:28 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-05 01:23:50.586889 | orchestrator | 2026-03-05 01:23:29 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-05 01:23:50.586893 | orchestrator | 2026-03-05 01:23:29 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-05 01:23:50.586896 | orchestrator | 2026-03-05 01:23:29 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-05 01:23:50.586900 | orchestrator | 2026-03-05 01:23:29 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-05 01:23:50.586904 | orchestrator | 2026-03-05 01:23:29 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-05 01:23:50.586910 | orchestrator | 2026-03-05 01:23:29 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-05 01:23:50.586914 | orchestrator | 2026-03-05 01:23:29 | INFO  | Importing image Cirros 0.6.3
2026-03-05 01:23:50.586918 | orchestrator | 2026-03-05 01:23:29 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-05 01:23:50.586921 | orchestrator | 2026-03-05 01:23:31 | INFO  | Waiting for image to leave queued state...
2026-03-05 01:23:50.586931 | orchestrator | 2026-03-05 01:23:33 | INFO  | Waiting for import to complete...
2026-03-05 01:23:50.586945 | orchestrator | 2026-03-05 01:23:43 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-05 01:23:50.586949 | orchestrator | 2026-03-05 01:23:44 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-05 01:23:50.586953 | orchestrator | 2026-03-05 01:23:44 | INFO  | Setting internal_version = 0.6.3 2026-03-05 01:23:50.586957 | orchestrator | 2026-03-05 01:23:44 | INFO  | Setting image_original_user = cirros 2026-03-05 01:23:50.586961 | orchestrator | 2026-03-05 01:23:44 | INFO  | Adding tag os:cirros 2026-03-05 01:23:50.586964 | orchestrator | 2026-03-05 01:23:44 | INFO  | Setting property architecture: x86_64 2026-03-05 01:23:50.586968 | orchestrator | 2026-03-05 01:23:44 | INFO  | Setting property hw_disk_bus: scsi 2026-03-05 01:23:50.586972 | orchestrator | 2026-03-05 01:23:45 | INFO  | Setting property hw_rng_model: virtio 2026-03-05 01:23:50.586976 | orchestrator | 2026-03-05 01:23:45 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-05 01:23:50.586980 | orchestrator | 2026-03-05 01:23:45 | INFO  | Setting property hw_watchdog_action: reset 2026-03-05 01:23:50.586983 | orchestrator | 2026-03-05 01:23:45 | INFO  | Setting property hypervisor_type: qemu 2026-03-05 01:23:50.586988 | orchestrator | 2026-03-05 01:23:46 | INFO  | Setting property os_distro: cirros 2026-03-05 01:23:50.586992 | orchestrator | 2026-03-05 01:23:46 | INFO  | Setting property os_purpose: minimal 2026-03-05 01:23:50.586995 | orchestrator | 2026-03-05 01:23:46 | INFO  | Setting property replace_frequency: never 2026-03-05 01:23:50.586999 | orchestrator | 2026-03-05 01:23:46 | INFO  | Setting property uuid_validity: none 2026-03-05 01:23:50.587003 | orchestrator | 2026-03-05 01:23:47 | INFO  | Setting property provided_until: none 2026-03-05 01:23:50.587007 | orchestrator | 2026-03-05 01:23:47 | INFO  | Setting property image_description: Cirros 2026-03-05 01:23:50.587011 | orchestrator | 2026-03-05 01:23:47 | INFO  | 
Setting property image_name: Cirros 2026-03-05 01:23:50.587014 | orchestrator | 2026-03-05 01:23:47 | INFO  | Setting property internal_version: 0.6.3 2026-03-05 01:23:50.587022 | orchestrator | 2026-03-05 01:23:48 | INFO  | Setting property image_original_user: cirros 2026-03-05 01:23:50.587026 | orchestrator | 2026-03-05 01:23:48 | INFO  | Setting property os_version: 0.6.3 2026-03-05 01:23:50.587030 | orchestrator | 2026-03-05 01:23:48 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-05 01:23:50.587034 | orchestrator | 2026-03-05 01:23:49 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-05 01:23:50.587037 | orchestrator | 2026-03-05 01:23:49 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-05 01:23:50.587104 | orchestrator | 2026-03-05 01:23:49 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-05 01:23:50.587109 | orchestrator | 2026-03-05 01:23:49 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-05 01:23:51.024676 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-05 01:23:53.626317 | orchestrator | 2026-03-05 01:23:53 | INFO  | date: 2026-03-04 2026-03-05 01:23:53.626509 | orchestrator | 2026-03-05 01:23:53 | INFO  | image: octavia-amphora-haproxy-2024.2.20260304.qcow2 2026-03-05 01:23:53.627405 | orchestrator | 2026-03-05 01:23:53 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260304.qcow2 2026-03-05 01:23:53.627437 | orchestrator | 2026-03-05 01:23:53 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260304.qcow2.CHECKSUM 2026-03-05 01:24:52.969827 | orchestrator | 2026-03-05 01:24:52 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/8dd41a9c45fa457fb0856736771f2ffb/work/logs" 2026-03-05 01:25:24.172778 | 
orchestrator -> localhost | changed: "/var/lib/zuul/builds/8dd41a9c45fa457fb0856736771f2ffb/work/artifacts" 2026-03-05 01:25:24.438201 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8dd41a9c45fa457fb0856736771f2ffb/work/docs" 2026-03-05 01:25:24.457828 | 2026-03-05 01:25:24.458058 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-05 01:25:25.440889 | orchestrator | changed: .d..t...... ./ 2026-03-05 01:25:25.441514 | orchestrator | changed: All items complete 2026-03-05 01:25:25.441609 | 2026-03-05 01:25:26.225285 | orchestrator | changed: .d..t...... ./ 2026-03-05 01:25:26.957763 | orchestrator | changed: .d..t...... ./ 2026-03-05 01:25:26.985453 | 2026-03-05 01:25:26.985588 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-05 01:25:27.022421 | orchestrator | skipping: Conditional result was False 2026-03-05 01:25:27.024836 | orchestrator | skipping: Conditional result was False 2026-03-05 01:25:27.046128 | 2026-03-05 01:25:27.046225 | PLAY RECAP 2026-03-05 01:25:27.046307 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-05 01:25:27.046343 | 2026-03-05 01:25:27.183407 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-05 01:25:27.185305 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-05 01:25:27.926569 | 2026-03-05 01:25:27.926743 | PLAY [Base post] 2026-03-05 01:25:27.941898 | 2026-03-05 01:25:27.942040 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-05 01:25:28.953634 | orchestrator | changed 2026-03-05 01:25:28.963712 | 2026-03-05 01:25:28.963847 | PLAY RECAP 2026-03-05 01:25:28.963922 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-05 01:25:28.964002 | 2026-03-05 01:25:29.090803 | POST-RUN END RESULT_NORMAL: [trusted : 
github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-05 01:25:29.093498 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-05 01:25:29.897405 | 2026-03-05 01:25:29.897583 | PLAY [Base post-logs] 2026-03-05 01:25:29.908206 | 2026-03-05 01:25:29.908362 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-05 01:25:30.374768 | localhost | changed 2026-03-05 01:25:30.392954 | 2026-03-05 01:25:30.393154 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-05 01:25:30.432825 | localhost | ok 2026-03-05 01:25:30.439487 | 2026-03-05 01:25:30.439665 | TASK [Set zuul-log-path fact] 2026-03-05 01:25:30.458196 | localhost | ok 2026-03-05 01:25:30.471842 | 2026-03-05 01:25:30.471996 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-05 01:25:30.510807 | localhost | ok 2026-03-05 01:25:30.516347 | 2026-03-05 01:25:30.516520 | TASK [upload-logs : Create log directories] 2026-03-05 01:25:31.044934 | localhost | changed 2026-03-05 01:25:31.047934 | 2026-03-05 01:25:31.048044 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-05 01:25:31.561056 | localhost -> localhost | ok: Runtime: 0:00:00.006725 2026-03-05 01:25:31.570438 | 2026-03-05 01:25:31.570623 | TASK [upload-logs : Upload logs to log server] 2026-03-05 01:25:32.169731 | localhost | Output suppressed because no_log was given 2026-03-05 01:25:32.173922 | 2026-03-05 01:25:32.174098 | LOOP [upload-logs : Compress console log and json output] 2026-03-05 01:25:32.235978 | localhost | skipping: Conditional result was False 2026-03-05 01:25:32.241597 | localhost | skipping: Conditional result was False 2026-03-05 01:25:32.256800 | 2026-03-05 01:25:32.257053 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-05 01:25:32.318806 | localhost | skipping: Conditional result was False 2026-03-05 01:25:32.319889 | 2026-03-05 01:25:32.322899 | localhost | skipping: Conditional 
result was False 2026-03-05 01:25:32.329033 | 2026-03-05 01:25:32.329212 | LOOP [upload-logs : Upload console log and json output]